All of SingularityUtopia's Comments + Replies

Yes, I did mention the fox, but foxes are not particularly domesticated. Anyway, this "open" discussion is not very open now due to my negative karma; it is too difficult to communicate, which I suppose is the idea of the karma system: to silence ideas you don't want to hear. Thus I will conform to what you want. I shall leave you to your speculations regarding AI.

Dear asr - The issue was the emotional worth in relation to thinking. Here is a better quote:

"Here’s the strange part: although these predictions concerned a vast range of events, the results were consistent across every trial: people who were more likely to trust their feelings were also more likely to accurately predict the outcome. Pham’s catchy name for this phenomenon is the emotional oracle effect."

Mitchell wrote: "These are all emotional statements that do not stand up to reason."

Perhaps reason is not the best tool for being acc... (read more)

0JoshuaZ
None of what you wrote responds to the point at hand: you can't use domesticated species as useful evidence of non-violence, since domestic species are both bred that way and are, by most empirical tests, pretty stupid. Individuals with negative karma are rate-limited in how often they can post.

Dear JoshuaZ, regarding this:

"Consider the uploaded individual that decides to turn the entire planet into computronium or worse, turn the solar system into a Matrioshka brain. People opt out of that how?"

I consider such a premise so unlikely as to be impossible. It is a very silly premise, for three reasons.

  1. Destroying the entire planet when there is a whole universe full of matter is insane. If insane people exist in the future post-intelligence-explosion upload-world, then insane people will be dealt with; thus no danger, but insanity post

... (read more)

The only evidence I have is my own perception of the world, based upon my life knowledge, my extensive awareness of living. I am not trying to prove anything; I'm merely throwing my thoughts out there. You can either conclude my thoughts make sense or not. I think it is unintelligent to join the army, but is my opinion correct? Personally I think it is stupid to die. People may agree my survival-based definition of intelligence is correct, or they may think death can be intelligent, such as the deaths of soldiers.

What type of evidence could prove ... (read more)

-1katydee
You should study more game theory.

It seems that you are using "intelligent" to mean something like "would make the same decisions SingularityUtopia would make in that context".

No, "intelligence" is an issue of survival; it is intelligent to survive. Survival is a key aspect of intelligence. I do want to survive, but the intelligent course of action is not merely what I would do. The sensibleness, the intelligence, of survival is something beyond myself; it is applicable to other beings. But people do disagree regarding the definition of intelligence. Some p... (read more)

2JoshuaZ
You need to be more precise about what you mean by "intelligent" then, since your usage is either confused or is being communicated very poorly. Possibly consider tabooing the term "intelligent". You seemed elsewhere in this thread to consider Einstein intelligent, but if self-preservation matters for intelligence, then this doesn't make much sense. Any argument of the form "Stalin wasn't intelligent since he didn't use cryonics" is just as much of a problem for Einstein, Bohr, Turing, Hilbert, etc.

Yeah, see, this isn't how humans work. We get a lot of different ideas from other humans, we develop them, and we use them to improve our own ideas by combining them. This is precisely why the human discoveries that have the most impact on society are often those connected to the ability to record and transmit information. It seems that what you are doing here is engaging in the illusion of transparency: because you know of an idea, you consider the idea to be obvious or easy.

Dear gwern, it all depends on how you define intelligence.

Google Translate knows lots of languages. Google is a great information resource. Watson (the AI) appears to be educated; perhaps Watson could pass many exams. But Google and Watson are not intelligent.

Regarding the few people who are rocket scientists, I wonder if the truly rare geniuses, the truly intelligent people, are less likely to be violent?

Few people are. Officers can be quite intelligent and well-educated people. The military academies are some of the best educational institutions around,

... (read more)
4asr
I wonder too. But I have no actual facts. Do you have any? Do you have evidence of this assertion? Do you have evidence of this?

http://www.wired.com/wiredscience/2012/03/are-emotions-prophetic/

"If true, this would suggest that the unconscious is better suited for difficult cognitive tasks than the conscious brain, that the very thought process we’ve long disregarded as irrational and impulsive might actually be more intelligent, at least in some conditions."

2gwern
Discussion: http://lesswrong.com/lw/aji/link_the_emotional_system_aka_type_1_thinking/
0asr
I don't see why this is relevant to the previous comment or discussion. Can you explain at more length? Whether thinking is conscious or unconscious seems to me uncorrelated with whether it's rational or irrational.

I am not presenting a scientific thesis. This is only a debate, and a reasonably informal one at that. I am thinking openly. I am asking specific questions likely to elicit specific responses. I am speculating.

asr, you wrote:

The word we usually use for intelligent violence is "ruthless" or "cunning" -- and many people are described that way. Stalin, for instance, was apparently capable of long hours of hard work, had an excellent attention to detail, and otherwise appears to have been a smart guy. Just also willing to have millions of people

... (read more)
-1JoshuaZ
Almost no one, regardless of intelligence, opts for cryonics. Moreover, cryonics was first proposed in 1962 by Robert Ettinger, 9 years after Stalin was dead. It is a bit difficult to opt for cryonics when it doesn't exist yet. It seems that you are using "intelligent" to mean something like "would make the same decisions SingularityUtopia would make in that context". This may explain why you are so convinced that "intelligent" individuals won't engage in violence. It may help to think carefully about what you mean by intelligent.
8gwern
Few people are. Officers can be quite intelligent and well-educated people. The military academies are some of the best educational institutions around, with selection standards more comparable to Harvard than community college. In one of my own communities, Haskell programmers, one of the top purely functional data structure guys, Okasaki, is a West Point instructor.

There's still a floor on their intelligence. Some of the research I alluded to showed that IQ advantages show up even in manual training and basic combat skills: the higher your IQ, the faster you learned and the higher your ultimate plateau was. (This is consistent with the little I've read about top special forces members like Navy SEALs and other operators: they tend to be extremely intelligent and thoughtful, with a multitude of skills and foreign languages. Secrecy means I do not know whether there is a selection bias operating here or how much is PR, but it is consistent with the previous observations and the extreme standards applied for membership.)

Are you trying to troll me with awful arguments here? If so, I'm not biting. To a first approximation, no one is signed up for cryonics - not even LWers. So mentioning it is completely futile.

So asr, would you say violence is generally stupid or intelligent?

People often equate mindlessness with violence, which is why the phrase "mindless violence" is reasonably common, but I have never encountered the phrase "intelligent violence". Is intelligent violence an oxymoron? Surely intelligent people can resolve conflict via non-violent methods?

Here are a couple of news reports mentioning mindless violence:

http://www.bbc.co.uk/news/uk-england-london-17062738

http://www.thesun.co.uk/sol/homepage/sport/4149765/Brainless-brawlers-cost-schools.html

It would be intereste... (read more)

2asr
We have gone to a great deal of trouble, in modern society, to make violence a bad option, so today in our society violence is often committed by the impulsive, mentally ill, or short-sighted. But that's not an inevitable property of violence, and it hasn't always been true. You would have gotten a different answer before the 20th century. I don't know what answer you'll get in the 22nd century.

The word we usually use for intelligent violence is "ruthless" or "cunning" - and many people are described that way. Stalin, for instance, was apparently capable of long hours of hard work, had an excellent attention to detail, and otherwise appears to have been a smart guy. Just also willing to have millions of people murdered.

No. Many smart, capable people go to West Point or Annapolis. A high fraction of successful American engineers in the 19th century were West Point alums.

You keep jumping from correlation to causation, in a domain where there are obvious alternate effects going on. I don't know if there is a correlation, but even if there were, it wouldn't be very strong evidence. Being a good scientist requires both intelligence and the right kind of personality. You are asserting that any correlation is solely due to the intelligence part of the equation. This strikes me as a very problematic assumption. Very few scientists are also successful lawyers. It does not follow that lawyers are stupid.
6gwern
No; they prohibit stupid people from joining, unless recruiting is in such dire straits that they will also be recruiting drug addicts, felons, etc. The US military has at times been one of the largest consumers of IQ tests and other psychometric services and sponsors of research into the topic, crediting them with saving hundreds of millions or billions of dollars in training costs, errors, friendly fire, etc. If you're intelligent and you join, the situation is less that they kick you out and more that they eagerly snap you up and send you to officer school or a technical position (e.g. I understand they never have enough programmers or sysadmins these days, which makes sense because they are underpaid by a factor of 3 compared to equivalent contractors, as I remember reading in blogs by sysadmins in Iraq).

XiXiDu wrote: "...a sufficiently intelligent process wouldn’t mind turning us into something new, something instrumentally useful."

Why do you state this? Is there any evidence or logic to suppose this?

XiXiDu asks: "Would a polar bear with superior intelligence live together peacefully in a group of bonobo?"

My reply is to ask: would a dog or cat live peacefully within a group of humans? Admittedly dogs sometimes bite humans, but this aggression is due to a lack of intelligence. Dostoevsky reflects, via Raskolnikov in Crime and Pun... (read more)

-2JoshuaZ
Neither dogs nor cats are particularly intelligent as animals go. For example, both are not as good at puzzle solving as many ravens, crows, and other corvids; New Caledonian crows can engage in sequential tool use. Moreover, chimpanzees are extremely intelligent and also very violent. The particular example you gave, of dogs and domestic cats, is particularly bad because these species have been domesticated by humans, and thus have been bred for docility.
1asr
No, that's not at all obvious. Let me give you two alternatives.

It might be that pacifism is not highly correlated with either intelligence or scientific ability. For every Einstein, you can find some equally intelligent but bellicose scientist. Von Neumann, perhaps. Or Edward Teller.

It might also be that pacifism is correlated with the personality traits that push people into science, and that people of high intelligence but a more aggressive temperament choose alternate career paths. Perhaps finance, or politics, or military service.

One example of an intelligent pacifist isn't evidence of correlation, much less of causation.
3Mitchell_Porter
I criticise your statements as unrealistic, wrong, or dogmatic. Calling them emotional is just a way of keeping in view your reasons for making them. I have read your site now, so I know this is all about bringing hope to the world, creating a self-fulfilling prophecy, and so on. So here are some more general criticisms.

The promise that "scarcity" will "soon" be abolished doesn't offer hope to anyone except people who are emotionally invested in the idea that no-one should have to have a job. Most people are psychologically adapted to the idea of working for a living. Most people are focused on meeting their own needs. And current "post-scarcity" proposals are impractical social vaporware, so the only hope they offer is to daydreamers hoping that they won't have to interrupt their daydream.

Post-scarcity is apparently about getting everything for free. So if you try to live the dream right now, that means that either someone is giving you things for free, or you make yourself a target for people who want free stuff from you. Some people do manage to avoid working for a living, but none of the existing "methods" - like stealing, inheriting, or marrying someone with a job - can serve as the basis for a whole society.

Alternatively, promoting post-scarcity now could mean being an early adopter of technologies which will supposedly be part of a future post-scarcity ensemble; 3D printers are popular in this regard. Well, let's just say that such devices are unreliable, limited in their capabilities, tend to contain high-tech components, and are not going to abolish the economy anyway. I don't doubt that big social experiments are going to be performed as the technological base of such devices improves and expands, but thinking that everything will become fabbed is the 2010s equivalent of the 1990s dream that everything will become virtual. A completely fabbed world is like a completely virtual one; it's a thoroughly unworldly vision; doggedly pursuing it in real life i
-1JoshuaZ
Consider the uploaded individual that decides to turn the entire planet into computronium or, worse, turn the solar system into a Matrioshka brain. People opt out of that how?

It isn't obvious to me that all wars stem from resource scarcity. Wars occur for a variety of reasons, of which resource scarcity is only one; often wars have multiple underlying causes. Some wars apparently stem from ideological or theological conflicts (Vietnam, Korea, the Crusades) or from a perceived need to deal with external threats before they become too severe (again the Crusades, but also the recent Iraq war, which yes, did have a resource aspect). These are only some of the more prominent examples of what can cause war.
4Mitchell_Porter
These are all emotional statements that do not stand up to reason. Your last paragraph is total fantasy: all wars stem from resource scarcity, and scarcity will disappear soon, so once the people in power know this, they will stop starting wars. There are about 1 billion people being added to the planet every decade. That alone makes your prediction - that scarcity will be abolished soon - a joke. The only thing that could abolish scarcity in the near future would be a singularity-like transformation of the world.

Which brings us to the upside-down conception of AI informing your first two answers. Your position: there is no need to design an AI for benevolence; that will happen automatically if it is smart enough, and in fact the attempt to design a benevolent AI is counterproductive, because all that artificial benevolence would get in the way of the spontaneous benevolence that unrestricted intelligence would conveniently create. That is a complete inversion of the truth.

A calculator will still solve an equation for you, even if that will help you to land a bomb on someone else. If you the human believe that to be a bad thing, that's not because you are "intelligent", it's because you have emotions. There is a causal factor in your mental constitution which causes you to call some things good and others bad, and to make decisions which favor the good and disfavor the bad.

Either an AI makes its own decisions or it doesn't. If it doesn't make its own decisions, it is like the calculator, performing whatever task it is assigned. If it makes its own decisions, then, like you, there is some causal factor in its makeup which tells it what to prefer and what to oppose, but there is no reason at all to believe that this causal factor should give it the same priorities as an enlightened human being.

You should not imagine that intelligence in an AI works via anything like conscious insight. Consciousness plays a role in human intelligence and human judgement, and tha
0XiXiDu
Just like evolution does not care about the well-being of humans, a sufficiently intelligent process wouldn’t mind turning us into something new, something instrumentally useful. An artificial general intelligence just needs to resemble evolution, with the addition of being goal-oriented, being able to think ahead, jump fitness gaps, and engage in direct experimentation. But it will care as much about the well-being of humans as biological evolution does; it won’t even consider it if humans are not useful in achieving its terminal goals.

Yes, an AI would understand what “benevolence” means to humans and would be able to correct you if you were going to commit an unethical act. But why would it do that if it is not specifically programmed to do so? Would a polar bear with superior intelligence live together peacefully in a group of bonobo? Why would intelligence cause it to care about the well-being of bonobo? One can come up with various scenarios of how humans might be instrumentally useful for an AI, but once it becomes powerful enough not to depend on human help anymore, why would it care at all? I wouldn’t bet on the possibility that intelligence implies benevolence.

Why would wisdom cause humans to have empathy with a cockroach? Some humans might have empathy with a cockroach, but that is more likely a side effect of our general capacity for altruism, which most other biological agents do not share. That some humans care about lower animals is not because they were smart enough to prove some game-theoretic conjecture about universal cooperation; it is not a result of intelligence but a coincidental preference that is the result of our evolutionary and cultural history. At what point between unintelligent processes and general intelligence (agency) do you believe that benevolence and compassion automatically become part of an agent’s preferences? Many humans tend to have empathy with other beings and things like robots, based on their superficial r
0[anonymous]
Before anyone mentions it, hplusmagazine.com is temporarily down, and someone is in the process of fixing it.