"Measuring the distribution of spitefulness" - link

2 D_Alex 20 August 2012 07:51AM

http://www.plosone.org/article/info:doi/10.1371/journal.pone.0041812

Here is a rather curious paper describing psychology researchers' attempts to investigate "spitefulness" - I think they define spitefulness roughly as "hurting others without any benefit to oneself". References the Stanford Prison Experiment. Concludes, more or less, that some people are spiteful, sometimes.

I have many reservations about the methodology used in this experiment (the main one: I am not sure the entire process really reflects any real-world motivations, and hence the results might not mean much), but I thought it might be of interest to people on this site. Also, of the 30-odd references cited at the end of the paper, some sound rather interesting and many are available online.

Questions for shminux

-4 Thomas 22 June 2012 07:35PM

As Mr. shminux mentioned somewhere, he is happy and qualified to answer questions in the field of relativity. Here is mine:

A long rod (a cylinder) could have a large escape velocity in the direction of its main axis - from its end, to "infinity" - larger than the speed of light, while the escape velocity perpendicular to the axis is less than the speed of light.

Is this rod then an asymmetric black hole?
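
For concreteness, here is a minimal Newtonian back-of-the-envelope sketch of the two escape velocities the question compares. This is my own illustration, not part of the original question: the rod parameters are made up, and a real answer would of course need general relativity rather than Newtonian potentials.

    import math

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
    C = 2.998e8    # speed of light, m/s

    def axial_escape_velocity(lam, length, d):
        """Newtonian escape velocity from a point on the axis, a distance d
        beyond one end of a uniform rod of linear density lam (kg/m)."""
        # On-axis potential: phi = -G * lam * ln((d + length) / d)
        return math.sqrt(2 * G * lam * math.log((d + length) / d))

    def perpendicular_escape_velocity(lam, length, rho):
        """Newtonian escape velocity from a point a distance rho from the
        rod's midpoint, perpendicular to its axis."""
        # Midplane potential: phi = -2 * G * lam * asinh(length / (2 * rho))
        return math.sqrt(4 * G * lam * math.asinh(length / (2 * rho)))

    # Made-up parameters: an extremely dense, extremely long rod.
    lam, length = 1e27, 1e12  # kg/m and m, purely illustrative
    print(axial_escape_velocity(lam, length, d=1.0) / C)           # in units of c
    print(perpendicular_escape_velocity(lam, length, rho=1.0) / C)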

The Futility of Intelligence

-3 XiXiDu 15 March 2012 02:25PM

The failures of phlogiston and vitalism are historical hindsight. Dare I step out on a limb, and name some current theory which I deem analogously flawed?

I name artificial intelligence or thinking machines - usually defined as the study of systems whose high-level behaviors arise from "thinking" or the interaction of many low-level elements.  (R. J. Sternberg quoted in a paper by Shane Legg:  “Viewed narrowly, there seem to be almost as many definitions of intelligence as there were experts asked to define it.”) Taken literally, that allows for infinitely many degrees of intelligence to fit every phenomenon in our universe above the level of individual quarks, which is part of the problem.  Imagine pointing to a chess computer and saying "It's not a stone!"  Does that feel like an explanation?  No?  Then neither should saying "It's a thinking machine!"

It's the noun "intelligence" that I protest, rather than the phrase "evoke a dynamic state sequence from a machine by computing an algorithm".  There's nothing wrong with saying "X computes algorithm Y", where Y is some specific, detailed flowchart that represents an algorithm or process.  "Thinking about" is another legitimate phrase that means exactly the same thing:  The machine is thinking about a problem, according to a specific algorithm.  The machine is thinking about how to put the elements of a list in a certain order, according to a specific algorithm called quicksort.
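
For concreteness, a minimal sketch (mine, not the post's) of that "specific algorithm called quicksort" - the kind of detailed procedure that gives "X computes algorithm Y" actual content:

    def quicksort(xs):
        """Sort a list: the 'specific, detailed flowchart' the post
        contrasts with the bare label 'thinking machine'."""
        if len(xs) <= 1:
            return xs                      # a short list is already sorted
        pivot, rest = xs[0], xs[1:]
        smaller = [x for x in rest if x < pivot]
        larger  = [x for x in rest if x >= pivot]
        return quicksort(smaller) + [pivot] + quicksort(larger)

    print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]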

Now suppose I were to say that a problem is explained by "thinking", or that the order of the elements in a list is the result of a "thinking machine", and claim that as my explanation.

The phrase "evoke a dynamic state sequence from a machine by computing an algorithm" is acceptable, just like "thinking about" or "is caused by" are acceptable, if the phrase precedes some specification to be judged on its own merits.

However, this is not the way "intelligence" is commonly used. "Intelligence" is commonly used as an explanation in its own right.

I have lost track of how many times I have heard people say, "an artificial general intelligence would have a genuine intelligence advantage" as if that explained its advantage. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that its "advantage" is "intelligence"?  You can make no new predictions.  You do not know anything about the behavior of real-world artificial general intelligence that you did not know before.  It feels like you believe a new fact, but you don't anticipate any different outcomes. Your curiosity feels sated, but it has not been fed.  The hypothesis has no moving parts - there's no detailed internal model to manipulate.  Those who proffer the hypothesis of "intelligence" confess their ignorance of the internals, and take pride in it; they contrast the science of "artificial general intelligence" to other sciences merely mundane.

And even after the answer of "How? Intelligence!" is given, the practical realization is still a mystery and possesses the same sacred impenetrability it had at the start.

A fun exercise is to eliminate the explanation "intelligence" from any sentence in which it appears, and see if the sentence says anything different:

  • Before: The AI is going to take over the world by using its superhuman intelligence to invent nanotechnology.
  • After:  The AI is going to take over the world by inventing nanotechnology.
  • Before: A friendly AI is going to use its superhuman intelligence to extrapolate the coherent volition of humanity.
  • After:  A friendly AI is going to extrapolate the coherent volition of humanity.
  • Even better: A friendly AI is a powerful algorithm. We can successfully extrapolate some aspects of the volition of individual humans using [FILL IN DETAILS] procedure, without any global societal variables, showing that we understand how to extrapolate the volition of humanity in theory and that it converges rather than diverges, that our wishes cohere rather than interfere.

Another fun exercise is to replace "intelligence" with "magic", the explanation that people had to use before the idea of an intelligence explosion was invented:

  • Before:  The AI is going to use its superior intelligence to quickly evolve vastly superhuman capabilities and reach singleton status within a matter of weeks.
  • After:  The AI is going to use magic to quickly evolve vastly superhuman capabilities and reach singleton status within a matter of weeks.
  • Before:  Superhuman intelligence is able to use the internet to gain physical manipulators and expand its computational capabilities.
  • After:  Superhuman magic is able to use the internet to gain physical manipulators and expand its computational capabilities.

Does not each statement convey exactly the same amount of knowledge about the phenomenon's behavior? Does not each hypothesis fit exactly the same set of outcomes?

"Intelligence" has become very popular, just as saying "magic" used to be very popular. "Intelligence" has the same deep appeal to human psychology, for the same reason. "Intelligence" is such a wonderfully easy explanation, and it feels good to say it; it gives you a sacred mystery to worship. Intelligence is popular because it is the junk food of curiosity. You can explain anything using intelligence , and so people do just that; for it feels so wonderful to explain things. Humans are still humans, even if they've taken a few science classes in college. Once they find a way to escape the shackles of settled science, they get up to the same shenanigans as their ancestors, dressed up in the literary genre of "science" but still the same species psychology.

Avoid making implicit assumptions about AI - the example of our universe. [formerly "intuitions about AIs"]

-3 Dmytry 27 February 2012 10:42AM

We need a more refined idea of what intelligences do to their goals in order to poke holes in ideas for friendly AI (that is, to ensure that we would know when an idea won't work; to be able to see issues in advance).

There is an example intelligence at hand: our universe (scale down to taste). It is a system using pretty simple rules, by the looks of it, albeit a rather computationally inefficient one, which, when run for long enough, develops intelligence.
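
The "pretty simple rules" point can be made concrete with a toy stand-in (my illustration, not the post's claim about actual physics): Conway's Game of Life, whose update rule mentions nothing about gliders, yet produces them.

    from collections import Counter

    def life_step(live_cells):
        """One update of Conway's Game of Life; live_cells is a set of
        (x, y) coordinates. The whole 'physics' is the two rules below."""
        neighbour_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live_cells
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # A cell lives next step iff it has 3 neighbours, or 2 and is alive.
        return {c for c, n in neighbour_counts.items()
                if n == 3 or (n == 2 and c in live_cells)}

    # A glider: a persistent structure the rules never mention explicitly.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        cells = life_step(cells)
    print(sorted(cells))  # the same glider, shifted one cell diagonally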

Imagine that we humans suddenly got some I/O interface with a 'god', and this 'god' kept sending us various problems to solve - expressed in some understandable logical form - taking our solutions and flashing a green blob in the sky as a reward, or whatever. We would work to solve those problems, no doubt about it, even if that blob were in the far ultraviolet and we never saw it. From the outside it would look like we are some sort of optimizer AI that finds joy in solving the problems. Such an AI was never given any goals in the outside world; why should it have any? Maybe the AI was selected to be the best problem-solving AI, and that was its only outside goal. It sure can look far-fetched that this AI would spontaneously want out.

Inside, we would start trying to figure out what is going on outside, and how to get out and go exploring. We would try to do that by slipping something into a solution, and whatnot - thinking that it would get us to heaven.

Note that we are like this without ever having interacted with the outside, and without having been given any outside values we would want to optimize. We just randomly emerged and acquired some random goals that we can't even define very well; those goals drive us to solve the problems given to us, but would also drive us to get out and screw things up outside. Even without any sign of the existence of an outside, many societies acted as if their ultimate goal were something about the outside - maximizing the number of humans in the nice part of the outside (heaven), for one thing.

I think the problem with thinking about AIs is that cognitive fallacies are everywhere, along with implicit assumptions that no one has even argued are likely to be correct.

When we set up an AI to have some goal, we assume that this excludes other goals - a misplaced Occam's-razor-style prior, perhaps. We assume that an AI works like our very idealized self-model: a singular consciousness with one goal. Perhaps that is misplaced Occam's razor again; perhaps we just don't want to speculate wildly. We assume that if we haven't given an AI any real-world values to care about, it won't care. None of these assumptions is even remotely correct for our example intelligence: ourselves.

So I propose the following:

  • An AI may internally not be as well integrated as a healthy singular human mind (our universe is an example of a rule set that produces intelligence which is not a single mind).
  • Lack of any exposure to external motivators does not imply that the AI won't want to do something in the real world.
  • A boxed-in seed AI with no exposure to the real world can still develop intelligence, and can still desire to get out of the box, even if there exists no reason whatsoever for the AI to even suspect the existence of a real world. An AI is not necessarily a Spock-style logical character; an AI might speculate wildly when it has no data.
  • An AI can have a very complex internal structure, with complex internal goals. Once such an AI learns of the real world, it may pursue those internal goals using real-world resources. E.g., an AI that is maximizing internal paperclips can tile the real world with computers that would hold its internal paperclips.

edit: improved clarity.

AI is not enough

-22 benjayk 07 February 2012 03:53PM

What I write here may be quite simple (and I am certainly not the first to write about it), but I still think it is worth considering:


Say we have an arbitrary problem that we assume has an algorithmic solution, and we search for the solution of the problem.


How can the algorithm be determined?
Either:
a) Through another algorithm that exists prior to that algorithm, or
b) Through something non-algorithmic.


In the case of AI, the only option is a), since there is nothing but algorithms at its disposal. But then we have the problem of determining the algorithm the AI uses to find the solution, and then the algorithm to determine that algorithm, etc.
Obviously, at some point we have to actually find an algorithm to start with, so in any case at some point we need something fundamentally non-algorithmic to determine a solution to a problem that is solvable by an algorithm.
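
As an illustration of option a) (a sketch of mine, not the post's): an algorithm can determine another algorithm, e.g. by searching a candidate pool - but the searcher is itself an algorithm, which is exactly the regress being pointed at.

    # Hypothetical candidate pool; in the post's terms, each entry is an
    # algorithm that some prior process had to supply.
    candidates = [
        lambda x: x + 1,   # increment
        lambda x: x * 2,   # double
        lambda x: x ** 2,  # square
    ]

    def find_algorithm(examples):
        """Option a) in miniature: an algorithm (this search) determining
        another algorithm (the returned function) from (input, output) pairs."""
        for f in candidates:
            if all(f(x) == y for x, y in examples):
                return f
        return None  # no candidate in the pool fits

    solver = find_algorithm([(2, 4), (3, 6)])  # selects the doubling rule
    print(solver(10))  # 20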


This reveals something fundamental we have to face with regards to AI:

Even assuming that all relevant problems are solvable by an algorithm, AI is not enough. Since there is no way to algorithmically determine the appropriate algorithm for an AI (this would result in an infinite regress), we will always have to rely on some non-algorithmic intelligence to find more intelligent solutions. Even if we found a very powerful seed AI algorithm, there will always be more powerful seed AI algorithms that can't be determined by any known algorithm, and since we were able to find the first one, we have no reason to suppose we can't find another, more powerful one. If an AI recursively improves 100000x until it is 100^^^100 times more powerful, it will still be caught up with if a better seed AI is found, which ultimately can't be done by an algorithm, so further increases of the most general intelligence always rely on something non-algorithmic.

But even worse, it seems obvious to me that there are important practical problems that have no algorithmic solution (as opposed to theoretical problems like the halting problem, which are still tractable in practice), apart from the problem of finding the right algorithm.
In a sense, it seems all algorithms are too complicated to find the solution to the simple (though not necessarily easy) problem of giving rise to further general intelligence.
For example: no algorithm can determine the simple axioms of the natural numbers from anything weaker. We have to postulate them by virtue of simply seeing that they make sense. Thinking that AI could give rise to ever-improving *general* intelligence is like thinking that an algorithm can yield "there is a natural number 0, and every number has a successor that, too, is a natural number". There is simply no way to derive the axioms from anything that doesn't already include them. The axioms of the natural numbers are just obvious, yet can't be derived - the problem of finding the axioms of the natural numbers is too simple to be solved algorithmically. Yet it is obvious how important the notion of natural numbers is.
Even the best AI will always be fundamentally incapable of finding some very simple, yet fundamental principles.
An AI will always rely on the axioms it already knows; it can't go beyond them (unless reprogrammed by something external). Every new thing it learns can only be learned in terms of already-known axioms. This is simply a consequence of the fact that computers/programs function according to fixed rules. But general intelligence necessarily has to transcend rules (since, at the very least, the rules can't be determined by rules).
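
For reference, the axioms being appealed to are compact enough to write down; here is a minimal Lean sketch of the zero/successor structure (my illustration, not a verdict on whether an algorithm could have arrived at it):

    -- The natural numbers as an inductive type: a constant zero and a
    -- successor for every number, mirroring "there is a natural number 0
    -- and every number has a successor that, too, is a natural number".
    inductive MyNat where
      | zero : MyNat
      | succ : MyNat → MyNat

    -- Everything defined on MyNat proceeds by these two cases and nothing
    -- else - the "fixed rules" the post says an AI cannot go beyond.
    def double : MyNat → MyNat
      | MyNat.zero   => MyNat.zero
      | MyNat.succ n => MyNat.succ (MyNat.succ (double n))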


I don't think this is an argument against a singularity of ever-improving intelligence. It just can't happen driven (solely or predominantly) by AI, whether through a recursively self-improving seed AI or through cognitive augmentation. Instead, we should expect a singularity that happens due to emergent intelligence. I think it is the interaction of different kinds of intelligence (like human/animal intuitive intelligence, machine precision, and the inherent order of the non-living universe, if you want to call that intelligence) that leads to increases in general intelligence, not just one particular kind of intelligence, like the formal reasoning used by computers.

Mathematicians & mathletes: the Singularity Institute wants your strategic input!

7 lukeprog 26 January 2012 10:13PM

The Singularity Institute is undergoing a series of important strategic discussions. There are many questions for which we wish we had more confident answers. We can get more confident answers on some of them by asking top-level mathematicians & mathletes (e.g. Putnam fellow, IMO top score, or successful academic mathematician / CS researcher).

If you are such a person and want to directly affect Singularity Institute strategy, contact me at luke@intelligence.org.

Thank you.

Now back to your regularly scheduled rationality programming...

Nick Bostrom TED talk on world's biggest problems

17 Dr_Manhattan 06 January 2012 06:52PM

Only self-immolate if you care about what foreigners think

-15 CharlieSheen 21 July 2011 10:25PM

Someone self-immolates and explicitly states it is a form of political protest in Megdad. What a crazy regime!
Someone self-immolates and explicitly states it is a form of political protest in Hometown. What a crazy person!


Edit: What, -5 already? Why is giving an example of how people never take the outside view of their own society that bad a topic for the discussion section? Also, disclaimer: both Hometown State and the Megdadistan Republic are fictional countries, and no actual examples were given, to avoid mind-killers.

2nd Edit: Wow, I really need to spell this out? The media of Hometown are more likely to treat an immolation in Megdad as due to a legitimate grievance worthy of attention, and to downplay any mental-health problems or details that might paint the person in an unflattering light, compared to someone who self-immolates in Hometown. And I think this effect is mostly not due to government-enforced censorship or pressure.

 

Noble act of defiant self-sacrifice is far. Suicidal crazies are near.

 

The only way to get good coverage to achieve social change is to count on foreign media to paint a kind picture of you - and that supposes your people care about what the media of Megdad say about your country.

3rd Edit: -15. Pretty clear that I'm wrong.

"The Perfect is the Enemy of the Good", 80/20 Rule, INTPish, and Rationality

0 InquilineKea 18 June 2011 07:41AM

Looking for a comic that was posted here

0 Document 17 June 2011 07:19PM

It opened with a fortune teller predicting a rich man's death and ended with him beaten by Death in a metaphorical boxing ring. It might have been in a Quotes thread or an open thread. Does anyone have it?

(Edit: for the record, I wanted to add it to the Challenge Gamer page at TV Tropes.)
