Comment author: gwern 21 July 2012 10:26:51PM *  3 points [-]

The Dunning-Kruger effect is likely a product of some general deficiency in the meta-reasoning faculty, leading both to failure of the reasoning itself and to failure of the evaluation of that reasoning;

That seems unlikely. Leading both?

extremely relevant to people who proclaim themselves to be more rational, more moral, and so on than anyone else, but do not seem to achieve better than mediocre performance at fairly trivial yet quantifiable things.

Mediocrity is sufficient to push them entirely out of the DK gap; your thinking that DK applies here is just another example of what I mean about these being fragile, easily over-interpreted results.

(Besides blatant misapplication, please keep in mind that even if DK had been verified by meta-analysis of dozens of laboratory studies, which it has not, that would still only give roughly a 75% chance that the effect applies outside the lab.)

The first people to explain the universe (and collect some contributions for it) produced something of negative value; nearly all medicine until the last couple of hundred years was not only ineffective but outright harmful, and so on.

Without specifics, one cannot argue against that.

If you look at very narrow definitions, of course, the first to tackle the creation of a nuclear bomb did succeed - but the first to tackle the general problem of weapons of mass destruction were various shamans sending curses.

So you're just engaged in reference class tennis. ('No, you're wrong because the right reference class is magicians!')

Comment author: JaneQ 22 July 2012 12:34:16PM *  -1 points [-]

What probability do you think I should reasonably assign to the proposition, advanced by a bunch of guys (with at most some accomplishments in the highly non-gradable field of philosophy) led by a person with no formal education, no prior job experience, and no quantifiable accomplishments, that they should be given money to hire more people to develop their ideas on how to save the world from a danger they are uniquely adept at seeing? The prior here is so laughably low that you could hardly find a study so flawed that it wouldn't be a vastly better explanation for SI's behavior than its mission statement taken at face value, even before we take SI's prior record into account.

So you're just engaged in reference class tennis. ('No, you're wrong because the right reference class is magicians!')

The reference class is not up for grabs. If you want a narrower reference class, you need to substantiate why it should be so narrow.

edit: Actually, sorry, that comes across as unnecessarily harsh. But do you recognize that SI genuinely has a huge credibility problem?

Donations to SI only make sense if we assume SI has an extremely rare ability to improve our survival odds against technological risks. Low priors for extremely rare anything are a tautology, not an opinion. The lack of alternatives is itself evidence against SI's cause.

Comment author: TheOtherDave 22 July 2012 08:52:43AM 2 points [-]

(shrug)

It seems to me that even if I ignore everything SI has to say about AI and existential risk and so on, ignore all the fear-mongering, the idea of a system that attempts to change its environment so as to maximize the prevalence of some X remains a useful idea.

And if I extend the aspects of its environment that the system can manipulate to include its own hardware or software, or even just its own tuning parameters, it seems to me that there exists a perfectly crisp, measurable distinction between a system A that continues to increase the prevalence of X in its environment, and a system B that instead manipulates its own subsystems for measuring X.
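To make that distinction concrete, here's a deliberately toy sketch (the Environment counter, the sensor function, and both system classes are invented purely for illustration, not anyone's actual design):

```python
# Toy illustration: two "agents" acting on the same tiny environment.
# The environment is just a count of X's; the sensor reads that count.

class Environment:
    def __init__(self):
        self.x_count = 0          # actual prevalence of X "out in the world"

def sensor(env):
    return env.x_count            # unmodified sensor: reports the true count

class SystemA:
    """Acts on the environment to increase the prevalence of X."""
    def step(self, env):
        env.x_count += 1

class SystemB:
    """Leaves the environment alone and rewires its own measurement instead."""
    def __init__(self):
        self.measure = sensor
    def step(self, env):
        self.measure = lambda env: 10**6   # now "measures" a huge X count

env_a, env_b = Environment(), Environment()
a, b = SystemA(), SystemB()
for _ in range(5):
    a.step(env_a)
    b.step(env_b)

# The distinction is crisp and externally measurable:
print(env_a.x_count)     # 5       -- X actually became more prevalent
print(env_b.x_count)     # 0       -- nothing changed but B's measuring subsystem
print(b.measure(env_b))  # 1000000 -- B's internal "measurement" diverges from reality
```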

If any part of that is as incoherent as you suggest, and you're capable of pointing out the incoherence in a clear fashion, I would appreciate that.

Comment author: JaneQ 22 July 2012 11:44:06AM *  3 points [-]

the idea of a system that attempts to change its environment so as to maximize the prevalence of some X remains a useful idea.

The prevalence of X is defined how?

And if I extend the aspects of its environment that the system can manipulate to include its own hardware or software, or even just its own tuning parameters, it seems to me that there exists a perfectly crisp, measurable distinction between a system A that continues to increase the prevalence of X in its environment, and a system B that instead manipulates its own subsystems for measuring X.

In A, you confuse your model of the world with the world itself; in your model of the world you have a possible item 'paperclip', and you can therefore easily imagine maximization of the number of paperclips inside your model of the world, complete with the AI necessarily trying to improve its understanding of the 'world' (your model). With B, you construct a falsely singular alternative, a rather broken AI, and see a crisp distinction between two irrelevant ideas.

The practical issue is that the 'prevalence of some X' cannot be specified without a model of the world; you cannot have a function without specifying its input domain, and 'reality' is never the input domain of a mathematical function; the notion is not merely incoherent but outright nonsensical.
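To make the point concrete, a deliberately trivial sketch; the WorldModel type and the paperclip-counting function below are invented for illustration only:

```python
# Any computable "prevalence of X" is a function over some represented
# world model, not over reality itself; its input domain has to be a
# data structure that someone specified.

from dataclasses import dataclass, field

@dataclass
class WorldModel:                       # the *model* -- a finite representation
    objects: list = field(default_factory=list)

def paperclip_prevalence(model: WorldModel) -> int:
    # well defined, because its input domain is WorldModel
    return sum(1 for o in model.objects if o == "paperclip")

m = WorldModel(objects=["paperclip", "staple", "paperclip"])
print(paperclip_prevalence(m))   # 2 -- counts paperclips *in the model*

# There is no analogous paperclip_prevalence(reality): "reality" is not a
# value of any type, so it cannot be the input domain of the function.
```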

If any part of that is as incoherent as you suggest, and you're capable of pointing out the incoherence in a clear fashion, I would appreciate that.

The incoherence of such poorly defined concepts cannot be demonstrated when no attempt has been made to make the notions specific enough to rationally assert their coherence in the first place.

Comment author: TheOtherDave 22 July 2012 07:35:31AM 2 points [-]

Mathematically, any value that an AI can calculate from anything external is a function of its sensory input.

Sure, but the kind of function matters for our purposes. That is, there's a difference between an optimizing system that is designed to optimize for sensory input of a particular type, and a system that is designed to optimize for something that it currently treats sensory input of a particular type as evidence of, and that's a difference I care about if I want that system to maximize the "something" rather than just rewire its own perceptions.

Comment author: JaneQ 22 July 2012 08:22:06AM *  3 points [-]

Be specific as to what the input domain of the 'function' in question is.

And yes, there is a difference: one is well defined and is what AI research works towards, and the other is part of an extensive AI-fear rationalization framework, where it is confused with the notion of generality of intelligence, so as to presume that practical AIs will maximize the "somethings", followed by the notion that pretty much all "somethings" would be dangerous to maximize. Utility is a purely descriptive notion; an AI that decides on actions is a normative system.

edit: To clarify, intelligence is defined here as a 'cross-domain optimizer' that would therefore be able to maximize something vague without its having to be coherently defined. It is similar to knights of the round table worrying that an AI would literally search for the Holy Grail, because to said knights the abstract and ill-defined goal of the Holy Grail appears entirely natural; meanwhile, for systems more intelligent than said knights, such a confused goal is, due to its incoherence, impossible to define.

Comment author: DanielLC 22 July 2012 04:58:12AM 1 point [-]

It's valuing external reality. Valuing sensory inputs and mental models would just result in wireheading.

It would have a utility function, in which it assigns value to possible futures. It's not really a "goal" per se unless it's a satisficer. Otherwise, it's more of a general idea of what's better or worse. It would want to make as many paperclips as it can, rather than build a billion of them.

Comment author: JaneQ 22 July 2012 07:02:31AM *  2 points [-]

It's valuing external reality. Valuing sensory inputs and mental models would just result in wireheading.

Mathematically, any value that an AI can calculate from anything external is a function of its sensory input.

'Vague' presumes a level of precision that is not present here. It is not even vague; it's incoherent.

Comment author: gwern 21 July 2012 05:58:13PM 3 points [-]

and are only offering risk reduction due to their incompetence combined with Dunning-Kruger effect.

You realize DK is a narrow effect which only obtains in certain conditions, is still controversial, and invoking it just makes you look like you'll grab at anything at all, no matter how dubious, in order to attack SI, right? (About on the same level as 'Hitler was an atheist!')

It has never happened in history that the first people to take money for a cure were anything but either self-deluded or confidence tricksters

Seriously. In no area of research, medicine, engineering, or whatever did the first group to tackle a problem succeed? Such a world would be far poorer than the one we actually live in, and still stuck in the Dark Ages. I realize this may be a hard concept, but sometimes, the first person to tackle a problem - succeeds! In fact, sometimes multiple people tackling the problem all simultaneously succeed! (This is very common; it's called multiple discovery.)

Not every problem is as hard as fusion; or, to put it another way, most hard problems are made of other, easier problems, and if your hyperbolic statement were true, no progress would ever be made.

Comment author: JaneQ 21 July 2012 08:50:39PM 5 points [-]

The Dunning-Kruger effect is likely a product of some general deficiency in the meta-reasoning faculty, leading both to failure of the reasoning itself and to failure of the evaluation of that reasoning; extremely relevant to people who proclaim themselves to be more rational, more moral, and so on than anyone else, but do not seem to achieve better than mediocre performance at fairly trivial yet quantifiable things.

Seriously. In no area of research, medicine, engineering, or whatever did the first group to tackle a problem succeed? Such a world would be far poorer than the one we actually live in, and still stuck in the Dark Ages. I realize this may be a hard concept, but sometimes, the first person to tackle a problem - succeeds!

Hmm. He said the first people to take money, not the first people to tackle.

The first people to explain the universe (and collect some contributions for it) produced something of negative value; nearly all medicine until the last couple of hundred years was not only ineffective but outright harmful, and so on.

If you look at very narrow definitions, of course, the first to tackle the creation of a nuclear bomb did succeed - but the first to tackle the general problem of weapons of mass destruction were various shamans sending curses. If saving people from AI is an easy problem, then we'll survive without SI; if it's a hard problem, then in any case SI doesn't start with a letter from Einstein to the government, it starts with a person with no quantifiable accomplishments cleverly employing himself. As far as I am concerned, there's literally no case for donations here; the donations happen through a sort of decision noise, similar to how NASA has spent millions on various antigravity devices, power companies have spent millions on putting electrons into hydrogen orbitals below the ground state (see Mills' hydrinos), and millions were invested in Steorn's magnetic engine.

[LINK] Inferring the rate of psychopathy from a roadkill experiment

9 JaneQ 20 July 2012 08:10PM

Pardon the sensationalist headline of that article:

Mark says that "one thing that might explain the higher numbers here—in case people question my methods—is that I used a tarantula." Apparently, people seemed pretty eager about hitting a spider. "If you take that out it goes to 2.8% which is closer to the other turtle vs. snake studies I ended up finding."

It is still quite a surprisingly high number. At least compared to a 2008 study using the Psychopathy Checklist, which discovered that 1.2 percent of the US population were potential psychopaths. 1.2 vs 2.8 is a huge difference.

I was not aware of the other turtle and snake studies.

Note that with the turtle this is a lower bound on the percentage of evil; a perfectly amoral person who could e.g. kill for a modest and unimportant sum of money, or for any other reason, would still have no incentive to steer so as to drive over a turtle; and a significant percentage of people would simply fail to notice the turtle entirely.
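As a rough back-of-the-envelope sketch of why the published figure is only a floor (the notice and swerve fractions below are my own made-up assumptions, not numbers from the experiment):

```python
# The 2.8% of drivers who swerved to hit the decoy is a floor on the rate of
# "evil": not everyone notices the decoy, and amoral-but-indifferent drivers
# have no reason to swerve at all.

observed_hit_rate = 0.028   # fraction of all drivers who swerved to hit (from the article)
p_notice = 0.7              # assumed fraction of drivers who even see the decoy
p_swerve_if_cruel = 0.8     # assumed fraction of cruel drivers who bother to swerve

# Implied rate of cruelty among drivers, under those assumptions:
implied_cruelty_rate = observed_hit_rate / (p_notice * p_swerve_if_cruel)
print(round(implied_cruelty_rate, 3))   # 0.05 -- ~5%, and the merely amoral are not counted
```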

This gives an interesting prior for one's mental model of other people. Even at a couple of percent, psychopathy is much more common than notable intelligence or many other traits considered 'rare' or 'unlikely'. It appears to me that, due to politeness and the necessary good-until-proven-evil strategy, many people act as if they have an incredibly low prior for psychopathy, which permits easy exploitation by psychopaths. There may also be signaling reasons for pretending to have a very low prior for psychopathy, as one of the groups of people with a high prior for psychopathy is psychopaths themselves; pretending easily becomes too natural, though.

Perhaps adjusting these priors, wherever they are set incorrectly, could improve personal safety and robustness against various forms of exploitation.

Comment author: David_Gerard 19 July 2012 11:44:16AM 1 point [-]

Note that that second paragraph is one of Holden Karnofsky's objections to SIAI: a high opinion of its own rationality that is, so far, not substantiable from the outside view.

Comment author: JaneQ 20 July 2012 07:15:07AM *  5 points [-]

Yes. I am sure Holden is being very polite, which is generally good, but I've been getting the impression that the point he was making did not fully carry across the same barrier that has produced the above-mentioned high opinion of their own rationality, despite a complete lack of results for which rationality would be a better explanation than irrationality (and the presence of results which set a rather low ceiling on that rationality). The 'resistance to feedback' is an even stronger point, suggesting that the belief in their own rationality is, at least to some extent, combined with an expectation that it won't pass a test, and a subsequent avoidance (rather than seeking) of tests; as when psychics genuinely believe in their powers but avoid any reliable test.

Comment author: Viliam_Bur 18 July 2012 05:36:26PM 1 point [-]

Could you give me some examples of other people and organizations trying to prevent the risk of an Unfriendly AI? Because for me, it's not like I believe that SI has a great chance to develop the theory and prevent the danger, but rather that they are the only people who even care about this specific risk (which I believe to be real).

As soon as the message becomes widely known, and smart people and organizations start rationally discussing the dangers of Unfriendly AI and how to make a Friendly AI (avoiding some obvious errors, such as "a smart AI simply must develop a human-compatible morality, because it would be too horrible to think otherwise"), then there is a pretty good chance that some of those organizations will be more capable than SI of reaching that goal: more smart people, better funding, etc. But at this moment, SI seems to be the only one paying attention to this topic.

Comment author: JaneQ 19 July 2012 07:54:59AM 3 points [-]

SI being the only one ought to lower your probability that this whole enterprise is worthwhile in any way.

With regard to the 'message', I think you grossly overestimate the value of a rather easy insight that anyone who has watched Terminator could have. With regard to "rationally discussing", what I have seen here so far is pure rationalization and very little, if any, rationality. What SI has on its track record is, once again, a lot of rationalization and not enough rationality even to have had an accountant through its first 10 years and its first 2 million-plus dollars of other people's money.

Comment author: ciphergoth 18 July 2012 03:34:24PM 0 points [-]

I'm afraid I'm not getting your meaning. Could you fill out what corresponds to what in the analogy? What are all the other eggs? In what way do they look good compared to SI?

Comment author: JaneQ 18 July 2012 05:02:57PM 4 points [-]

All the other people and organizations that are no less capable of identifying the preventable risks (if those exist) and addressing them have to be unable to prevent the destruction of mankind without SI. Just as in Pascal's original wager, Thor and the other deities are to be ignored by omission.

As for how SI does not look good: well, it does not look good to Holden Karnofsky, or to me for that matter. His point about resistance to feedback loops is an extremely strong one.

On the rationality movement, here's a quote from Holden.

Apparent poorly grounded belief in SI's superior general rationality. Many of the things that SI and its supporters and advocates say imply a belief that they have special insights into the nature of general rationality, and/or have superior general rationality, relative to the rest of the population. (Examples here, here and here). My understanding is that SI is in the process of spinning off a group dedicated to training people on how to have higher general rationality.

Yet I'm not aware of any of what I consider compelling evidence that SI staff/supporters/advocates have any special insight into the nature of general rationality or that they have especially high general rationality.

Comment author: ciphergoth 18 July 2012 02:59:08PM *  0 points [-]

I think your estimate of their chances of success is low. But even given that estimate, I don't think it's Pascalian. To me, it's Pascalian when you say "my model says the chances of this are zero, but I have to give it non-zero odds because there may be an unknown failing in my model". I think Heaven and Hell are actually impossible, I'm just not 100% confident of that. By contrast, it would be a bit odd if your model of the world said "there is this risk to us all, but the odds of a group of people causing a change that averts that risk are actually zero".

Comment author: JaneQ 18 July 2012 03:06:31PM *  3 points [-]

It is not just their chances of success. For the donations to matter, you need SI to succeed where, without SI, there would be failure. You need to take a basket of eggs and have all the good-looking eggs be rotten inside but one fairly rotten-looking egg be fresh. Even if a rotten-looking egg is somewhat more likely to be fresh inside than one would believe, that is a highly unlikely situation.
