Comment author: OrphanWilde 14 August 2012 12:33:17PM 0 points [-]

Before you build a new crop of them, first you should probably make sure society is even listening to its Einsteins and Feynmans, or that the ones you have are even interested in solving these problems. It does no good to create a crop of supergeniuses who aren't interested in solving your problems for you and wouldn't be listened to if they did.

Comment author: nykos 14 August 2012 07:26:46PM -3 points [-]

I upvoted you for responding with a refutation and not simply downvoting.

Comment author: nykos 14 August 2012 09:00:30AM *  -6 points [-]

The problem with FAI is that it is nearly impossible for human minds of even high intellect to get good results solely through philosophy, without experimental feedback. Aristotle famously got it wrong when he deduced philosophically that heavier bodies fall faster than lighter ones.

Also, I believe that it is a pointless endeavor for now. Here are 2 reasons why I think that's the case.

*1. We humans don't have any idea whatsoever as to what constitutes the essence of an intelligent system. Because of our limited intellects, our best bet is to simply take the one intelligent system that we know of - the human brain - and simply replicate it in an artificial manner. This is a far easier task than designing an intelligence from scratch, since in this case the part of design was already done by natural (and sexual) selection.

Our best hope and easiest path for AI is simply to replicate the human brain (preferably the brain of an intelligent and docile human being), and make a body suitable for it to inhabit. Henry Markram is working on this (hopefully he will use himself or someone like himself for the first template - instead of some stupid or deranged human), and he notably hasn't been terribly concerned with Friendly AI. Ask yourself this: what makes for FH (Friendly Humans)? And here we turn to neuroscience, evo-psych and... the thing that some people want to avoid discussing for fear of making others uncomfortable: HBD. People of higher average IQ are, on average, less predisposed to violence. Inbred populations are more predisposed to clannish behavior (we would ideally want an AI that is the opposite of that, that is most willing to be tolerant of out-groups). Some populations of human beings are more predisposed to violence, while some have a reputation for docility (you can see that in the crime rates). It's in the genes and the brain that they produce, combined with some environmental factors like random mutations, the way proteins fold and are expressed, etc.

So obviously the most promising way to create Friendly AI at this point in time is to replicate the brain of a Friendly Human.

*2. We might not be smart or creative enough on average to be able to build a FAI, or it might take too long a time to do so. This is a problem that, if it exists, will not only fail to go away but will actually compound itself. As long as there are no restrictions whatsoever on reproduction and some form of welfarism and socialism exists in most nations on Earth, there will be dysgenics with regards to intelligence - since intelligent people generally have fewer children than those on the left half of the Bell curve - while the latter are basically subsidized to reproduce by means of wealth transfer from the rich (who are also more likely to have above-average IQs, else they wouldn't be rich).

Even if we do possess the knowledge to replicate the human brain, I believe it is highly unlikely that it will happen in a single generation. AI (friendly or not) is NOT just around the corner. Humanity doesn't even possess the ability to write a bugless operating system, or build a computer that obeys sane laws of personal computing. What's worse, it did possess the ability to build something reasonably close to these ideals, but that ability is lost today. If building FAI takes more than one generation, and the survival of billions of people depends on it, then we had better start sooner rather than later.

The current bottleneck with AI and most science in general is the number of human minds able and willing to do it. Without the ability to mass-produce at least human-level AI, we simply desperately need to maximize the proportion of intelligent and conscientious human beings, by producing as many of them as possible. The sad truth is this: one Einstein or Feynman is more valuable when it comes to the continued well-being of humanity than 99% of the rest of human beings, who are simply incapable of producing such high-level work and thought because of either genetics or environmental factors, i.e. conditions in the uterus, enough iodine, etc. The higher the average intelligence of humanity, the more science thrives.

Eugenics for intelligence is the obvious answer. This can be achieved through various means, discussed in this very good post on West Hunter. Just one example, which is one of the slowest but the one advanced nations are 100% capable of doing right now: advanced nations already possess the means to create embryos using the sperm and eggs of the best and brightest of scientists alive today. If our leaders simply conditioned welfare and even payments of large sums of money for the below-average IQ women on them acting as surrogate mothers for "genius" embryos, in 20-30 years we could have dozens of Feynmans and tens of thousands of Yudkowskys working on AI. This would have the added benefit on keeping the low-IQ mothers otherwise pregnant and unavailable for spreading low-IQ genes to the next generation, which would result in less people who are a net drain on the future society and would cause only time-consuming problems for the genius kids (like stealing their possessions or engaging in other criminal activities).

I do realize that increasing intelligence in this manner is bound to have an upper limit and, furthermore, will have some other drawbacks. The high incidence of Tay-Sachs disease among the 110 average IQ Ashkenazi Jews is an illustration of this. But I believe that the discoveries of the healthy high IQ people have the potential to provide more hedons than the dolors of the Tay-Sachs sufferers (or other afflictions of high-IQ people, including some less serious ones like myopia).

EDIT: Given the above, especially if *2. is indeed the case, it is not unreasonable to believe that donating to AmRen or Steve Sailer has greater utility than donating to SIAI. I believe that the brainpower at SIAI is better spent on a problem that is almost as difficult as FAI, namely making HBD acceptable discourse in the scientific and political circles (preferably without telling people who wouldn't fully grasp it and would instead use it as justification for hatred towards Blacks), and specifically peaceful, non-violent eugenics for intelligence as a policy for the improvement of human societies over time.

Comment author: nykos 14 August 2012 07:16:43PM -6 points [-]

OK, I got two minuses already, can't say I'm surprised because what I wrote is not politically correct, and probably some of you thought that I broke the "politics is the mind-killer" informal rule (which is not really rational if you happen to believe that the default political position - the one most likely to pass under the radar as non-mindkilling - is not static, but in fact is constantly shifting, usually in a leftwards direction).

For the sake of all rationalists, I hope I was downvoted because of the latter. Otherwise, all hope for rational argument is lost, if even people in the rationalist community adopt thought processes more similar to those of politicians (i.e., demotism) than true scientists.

The unfortunate fact is that you cannot separate the speed of scientific progress from public policy or the particular structure of the society engaged in science. Science is not some abstract ideal, it is the triumph of the human mind, of the still-rare people possessing both intelligence and rationality (the latter may even be restricted only to their area of expertise, see Abdus Salam or Georges Lemaître). Humans are inherently political animals. The quality of science depends directly, first and foremost, on the number and quality of minds performing it, and some political positions happen to be ways to increase that number more than others. Simply ignoring the connection is not an option if you really believe in the promise of science to help improve the lives of every human being no matter his IQ or mental profile (like I do).

If you downvote me, I have one request: I would at least like to read why.

Comment author: Luke_A_Somers 14 August 2012 02:13:50PM 1 point [-]

A Feynman raised by an 80 IQ mother... wouldn't be Feynman

Comment author: nykos 14 August 2012 06:35:56PM *  -2 points [-]

I concede that, under some really extreme environmental conditions, any genetic advantages would be canceled out. So, you might actually be right if the IQ 80 mother is really bad. Money should be provided to poor families by the state, but only as long as they raise their child well - as determined by periodic medical checks. Any person, no matter the IQ, can do one thing reasonably well, and that is to raise children to maturity.

But I believe you are taking the importance of parenthood way too far, and disregarding the hereditarian point of view too easily. The blank-slate bias is something to be avoided. I would suggest you read this article by Matt Ridley.

Excerpt:

Today, a third of a century after the study began and with other studies of reunited twins having reached the same conclusion, the numbers are striking. Monozygotic twins raised apart are more similar in IQ (74%) than dizygotic (fraternal) twins raised together (60%) and much more than parent-children pairs (42%); half-siblings (31%); adoptive siblings (29%-34%); virtual twins, or similarly aged but unrelated children raised together (28%); adoptive parent-child pairs (19%) and cousins (15%). Nothing but genes can explain this hierarchy.

Comment author: nykos 14 August 2012 06:16:34PM *  0 points [-]

The society will be listening to its Einsteins and Feynmans once they band together and figure out how to use the dark arts to take control of the mass-media and universities away from their present owners and use them for their own, more enlightened goals. Or at least ingratiate themselves before the current rulers. They could promise to build new bombs or drones, for example. As for not being interested in solving FAI and these kinds of problems, that's really not a very convincing argument IMO. Throughout history, in societies of high average IQ and a culture tolerant of science, there was never a shortage of people curious about the world. Why wouldn't people with stratospheric IQ be curious about the world and enjoy the challenge of science, especially if they live in a brain-dead society which routinely engages in easy and boring trivialities? I mean, what would you choose between working on FAI or watching the Kardashians? I know what I would, even though my IQ is not very much above average and I'm really bad at probability problems.

There will never be a shortage of nerds and Asperger types out there, at least not for a long time, even with the current dysgenic trends.

Comment author: MileyCyrus 24 July 2012 03:06:43AM 12 points [-]

It's quicker to recruit existing people and turn them into rationalists than to create new people from scratch. This approach will eventually exhaust the gene pool, but not for hundreds of generations.

Comment author: nykos 30 July 2012 05:24:53PM 1 point [-]

Good luck explaining Bayes' law to people with IQs below 90.

Comment author: shminux 24 July 2012 03:13:58AM 4 points [-]

Only assuming that rationalism is inheritable, which is not at all obvious.

Comment author: nykos 30 July 2012 05:15:45PM 2 points [-]

Rationalism may not be heritable, but intelligence surely is.

Let's face it, LessWrong and rationalism in general appeal mostly to people with at least 1 SD above average IQ.

Comment author: nykos 03 June 2012 12:37:52PM -1 points [-]

Given that the burden of proof regarding the equality of intelligence of human populations that have evolved in reproductive isolation from each other for thousands, if not tens of thousands, of years, and in radically different environments (of varying survival difficulty), lies with the egalitarians claiming that all human populations have the same intelligence distribution - I'd say that this article doesn't even belong on LessWrong.

What we need instead is either: a) An article explaining natural selection to those who don't understand where people who don't believe in human neurological uniformity are coming from; b) An article that proves that ALL biomes on planet Earth have had the exact same selection pressures for intelligence in modern H.sapiens throughout the past 100,000 years. Furthermore, unless you have a belief that Homo sapiens, Homo neanderthalensis and Denisovans had the exact same intelligence distribution, this article must prove that the 2-3% Neanderthal admixture in all non-Africans and the 5% Denisovan admixture in some Oceanians is not related to brain function and intelligence.

Sadly, we live in a world where human neurological uniformity is the null hypothesis even for people who should know better, given knowledge of evolution by natural selection.

Comment author: [deleted] 01 May 2012 01:06:48PM *  42 points [-]

For example, in many ways nonsense is a more effective organizing tool than the truth. Anyone can believe in the truth. To believe in nonsense is an unforgeable demonstration of loyalty. It serves as a political uniform. And if you have a uniform, you have an army.

--Mencius Moldbug, on belief as attire and conspicuous wrongness.

Source.

In response to comment by [deleted] on Rationality Quotes May 2012
Comment author: nykos 03 May 2012 12:21:41PM *  4 points [-]

More quotes by Mencius Moldbug:

When they say things like "in cognitive science, Bayesian reasoner is the technically precise codeword that we use to mean rational mind," they really do mean it. Move over, Aristotle!

Of course, in Catholicism, Catholic is the technically precise codeword that they use to mean rational mind. I am not a Catholic or even a Christian, but frankly, I think that if I had to vote for a dictator of the world and the only information I had was whether the candidate was an orthodox Bayesian or an orthodox Catholic, I'd go with the latter.

The only problem is that this little formula is not a complete, drop-in replacement for your brain. If a reservationist is skeptical of anything on God's green earth, it's people who want to replace his (or her) brain with a formula.

To make this more concrete, let's look at how fragile Bayesian inference is in the presence of an attacker who's filtering our event stream. By throwing off P(B), any undetected pattern of correlation can completely foul the whole system. If the attacker, whenever he pulls a red ball out of the urn, puts it back and keeps pulling until he gets a blue ball, the Bayesian "rational mind" will conclude that the urn is entirely full of blue balls. And Bayesian inference certainly does not offer any suggestion that you should look at who's pulling balls out of the urn and see what he has up his sleeves. Once again, the problem is not that Bayesianism is untrue. The problem is that the human brain has a very limited capacity for analytic reasoning to begin with.

They are all from the article A Reservationist Epistemology
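The urn example in the quoted passage is easy to check numerically. The sketch below (the function names and the 11-point hypothesis grid are my own illustration, not anything from the quoted article) runs a textbook discrete Bayesian update twice over the same half-red urn: once on an honestly sampled draw stream, and once on a stream where an attacker silently redraws every red ball. The updater, which trusts its event stream, concludes the filtered urn is almost entirely blue — the failure mode the quote describes.

```python
import random

# Candidate blue-ball fractions for a simple discrete uniform prior.
GRID = [i / 10 for i in range(11)]

def posterior_mean_blue(observations):
    """Bayesian update over the grid of urn compositions; returns the
    posterior mean of the blue-ball fraction under a uniform prior."""
    weights = {p: 1.0 for p in GRID}
    for obs in observations:
        for p in GRID:
            weights[p] *= p if obs == "blue" else 1 - p
    total = sum(weights.values())
    return sum(p * w for p, w in weights.items()) / total

def honest_stream(n, true_blue=0.5, seed=1):
    """Fair draws from an urn that is `true_blue` fraction blue."""
    rng = random.Random(seed)
    return ["blue" if rng.random() < true_blue else "red" for _ in range(n)]

def filtered_stream(n, true_blue=0.5, seed=0):
    """The attacker's stream: every red draw is discarded and redrawn,
    so the observer only ever sees blue balls."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        while rng.random() >= true_blue:
            pass  # red ball: put it back and keep pulling
        out.append("blue")
    return out

honest_estimate = posterior_mean_blue(honest_stream(60))    # near 0.5
fooled_estimate = posterior_mean_blue(filtered_stream(60))  # near 1.0
```

The update itself is not the bug: conditional on the observations, the math is correct. The failure is upstream, in treating the reporting process as unbiased sampling, which is why the quote's remedy is to model who is pulling the balls, not to abandon the formula.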

Comment author: [deleted] 02 May 2012 06:34:09AM *  5 points [-]

A man can dream, can't he? Note he isn't advocating nonsense as an organizing tool; much of his wackier thought is precisely about trying to make an organizing tool work as well as nonsense does. Unfortunately I don't think he has succeeded, since in my opinion neocameralism is unlikely to be implemented and likely to blow up if someone did implement it.

In response to comment by [deleted] on Rationality Quotes May 2012
Comment author: nykos 03 May 2012 11:48:47AM *  5 points [-]

Even though his prescription may be lacking (here is some criticism of neocameralism: http://unruled.blogspot.com/2008/06/about-fnargocracy.html ), his description and diagnosis of everything wrong with the world is largely correct. Any possible political solution must begin from Moldbug's diagnosis of all the bad things that come with having Universalism as the most dominant ideology/religion the world has ever experienced.

One example of a bad consequence of Universalism is the delay of the Singularity. If you, for example, want to find out why Jews are more intelligent on average than Blacks, the system will NOT support your work and will even ostracize you for being racist, even though that knowledge might one day prove invaluable to understanding intelligence and building an intelligent machine (and also helping the people who are less fortunate at the genetic lottery). The followers of a religion that holds the Equality of Man as primary tenet will be suppressing any scientific inquiry into what makes us different from one another. Universalism is the reason why common-sense proposals like those of Greg Cochran ( http://westhunt.wordpress.com/2012/03/09/get-smart/ ) will never be official policy. While we don't have the knowledge to create machines of higher intelligence than us, we do know how to create a smarter next generation of human beings. Scientific progress, economic growth and civilization in general are proportional to the number of intelligent people and inversely proportional to the number of not-so-smart people. We need more smart people (at least until we can build smarter machines), so that we all may benefit from the products of their minds.
