Ignore all the stuff about provably friendly AI, because AFAIK it's fairly stuck at the fundamental level, facing theoretical impossibility due to Löb's theorem, and it's probably going to take a lot more than five years. Instead, work on cruder methods which have less chance of working but far more chance of actually being developed in time. Specifically, if Google is developing it in 5 years, then it's probably going to be DeepMind with DNNs and RL, so work on methods that can fit in with that approach.
This is also interesting: Barack Obama on Artificial Intelligence, Autonomous Cars, and the Future of Humanity
Joi Ito said several things that are unpleasant but are probably believed by most people, and so I am glad for the reminder.
JOI ITO: This may upset some of my students at MIT, but one of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.
Yes, you would expect non-white, older women who are less comfortable talking to computers to be better suited to dealing with AI friendliness! Their life experience of structural oppression helps them formally encode morals!
ITO: [Temple Grandin] says that Mozart and Einstein and Tesla would all be considered autistic if they were alive today. [...] Even though you probably wouldn’t want Einstein as your kid, saying “OK, I just want a normal kid” is not gonna lead to maximum societal benefit.
I should probably get a good daily reminder that most people would not, in fact, want their kid to be as smart, impactful, and successful in life as Einstein, and would prefer "normal", not-too-much-above-average kids.
I think we discussed this previously on LW. In general the argument isn't convincing in his case.
Gilead made $20 billion with a drug that cures one virus. If a pharma company thought that this approach had a 10% chance of working to cure all viruses, spending $100 million or more would be very interesting for traditional pharma companies under the current incentive scheme.
"Utilitarianism is a theory in normative ethics holding that the best moral action is the one that maximizes utility." -Wikipedia
The very next sentence starts with "Utility is defined in various ways..." It is entirely possible for there to be utility functions that treat sentient beings differently. John Stuart Mill may have phrased it as "the greatest good for the greatest number", but the crux is in the word "good", which is left undefined. This is as opposed to, say, virtue ethics, which doesn't care per se about the consequences of actions.
I agree. I think it's very unlikely FAI could be produced from MIRI's very abstract approach. At least anytime soon.
There are some methods that may work on NN based approaches. For instance my idea for an AI that pretends to be human. In general, you can make AIs that do not have long-term goals, only short term ones. Or even AIs that don't have goals at all and just make predictions. E.g., predicting what a human would do. The point is to avoid making them agents that maximize values in the real world.
These ideas don't solve FAI on their own. But they do give a way of getting useful work out of even very powerful AIs. You could task them with coming up with FAI ideas. The AIs could write research papers, review papers, prove theorems, write and review code, etc.
I also think it's possible that RL isn't that dangerous. Reinforcement learners can't model death and don't care about self-preservation. They may try to hijack their own reward signal, but it's difficult to predict what they would do after that; e.g., they might just tweak their own RAM to set reward = +Inf and then not do anything else. It may be harder to create a working paperclip maximizer than is commonly believed, even if we do get superintelligent AI.
These six principles are true as far as they go, but I feel they're so weak as not to be very useful. I'd like to offer a more cynical view.
The article's goal is, more or less, to avoid being convinced of untrue things by motivated agents. This has a name: Defense Against the Dark Arts. And I feel like these six principles are about as effective in real life as taking the canonical DADA first year class and then going up against HPMOR Voldemort.
With today's information technology and globalization, we're all exposed to world-class Dark Arts practitioners. Not being vulnerable to Cialdini's principles might help defend you in an argument with your coworker. But it won't serve you well when doubting something you read in the news or in an FDA-endorsed study.
And whatever your coworker or your favorite blog was arguing probably derives from such a curated source to begin with. All arguments rest on factual beliefs - outside of math anyway - and most of us are very far from being able to verify the facts we believe. And your own prior beliefs need to be well supported, to avoid being rejected on the same basis.
Yes, those with my values will live here, in Gondor. Your folks can live other there, in Mordor. Our citizens will no longer come into contact and conflict with one another, and peace will reign forever.
What, these segregated regions THEMSELVES come into conflict? Absurd. What would you even call a conflict that was between large groups of people? That could never happen. Everyone who shares my value system knows that lots of people would die, and we all agree that nothing could be worth that.
https://www.quora.com/How-can-I-get-Wi-Fi-for-free-at-a-hotel/answer/Yishan-Wong
Want free wifi when staying at a hotel? Ask for it. Of course! Duh, it seems so obvious now that I think about it.
She could read "The Basic AI Drives" to him at night.
Some possible arguments against charity. Personally, I think it is normal to donate around 1 percent of one's income to charity.
- Some can't survive on less, or have other obligations that look like charity (child support).
- We would have less incentive to earn more.
- It would hurt our economy, as it is consumer-driven. We must buy iPhones.
- I do many useful things intended to help other people, but I need pleasures to sustain my commitment, so I spend money on myself.
- I pay taxes, and that is like charity.
- I know better how to spend money on my own needs.
- Human psychology is about summing different values in one brain, so I can spend only part of my energy on charity.
- If I buy goods, my money goes to working people, so it is like charity for them. If I stop buying goods, they will be jobless and will need charity money to survive. So the more I give to charity, the more people need it.
- If you overdonate, you could flip-flop and start to hate the whole thing, especially if you find that your money was not spent effectively.
- Donating 100 percent will make you look crazy in the eyes of some, and their will to donate will diminish.
- If you spend more on yourself, you can ask for a higher salary and as a result earn more and donate more. Only a homeless and jobless person could donate 100 percent.
Brain drain has been a concern of some for a long time.
I suspect attempted telekinesis is relevant.
The standard way to learn massage is through taking a course.
I would also recommend Betty Martin's 3-Minute Game as a secular massage-like practice: https://www.youtube.com/watch?v=auokDp_EA80
Is the following a rationality failure? When I make a stupid mistake that caused some harm I tend to ruminate over it and blame myself a lot. Is this healthy or not? The good thing is that I analyze what I did wrong and learn something from it. The bad part is that it makes me feel terrible. Is there any analysis of this behaviour out there? Studies?
just looking for a rough sketch
Well, you can probably go about it in the following way. IQ is and was a controversial concept. One of the lines of attack against it was that it is meaningless, that the number coming out of the IQ test does not correspond to anything in real life. This is often expressed as "IQ measures the skill of taking IQ tests".
To deal with this objection people ran a number of studies. Typically you take a set of young people and either give them a proper IQ test or rely on another test which is a decent IQ proxy -- usually the SAT in the US or one of the tests that the military gives to all its drafted or enlisted men. After that you follow that set of people and collect their life outcomes, from income to criminal records. Once you've done that you can see whether the measured IQ actually correlates to life outcomes. And yes, it does.
I don't have links to actual studies handy, but you can easily google them up, and you can take a look at a not-fully-rigorous description of the various tiers of IQ and what they mean in real-life terms.
Basically what these studies give you is the cost of an IQ point, cost in terms of a lot of things -- income, chance to end up in prison, longevity (high-IQ people are noticeably healthier), etc.
Given this, you can calculate the expected outcomes for the US black population. If their average IQ is 10-15 points lower, you can translate this into expected income (lower than the US mean), expected chance of a criminal conviction (higher than the US mean) and other things you're interested in. Once you've done that, you can compare your expected values with ones empirically observed. Any remaining gap will be due to something other than the IQ differential.
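The expected-vs-observed comparison described above is just arithmetic once a regression slope is in hand. A minimal sketch, in which every number is invented purely for illustration and not an estimate from any real study:

```python
# Hypothetical sketch of the decomposition described above.
# All three inputs are made-up placeholders, not real estimates.
slope = 800.0           # assumed outcome change per IQ point (e.g. $/year of income)
mean_iq_gap = 12.0      # assumed difference in group mean IQ
observed_gap = 11000.0  # assumed observed difference in the outcome

expected_gap = slope * mean_iq_gap          # gap predicted by the IQ differential alone
residual_gap = observed_gap - expected_gap  # gap due to everything else

print(expected_gap)  # 9600.0
print(residual_gap)  # 1400.0
```

With real data the slope would come from a longitudinal study of the kind described above, and the residual is what remains to be explained by other factors.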
informs your politics
On a macro level it does not. There are smart people, there are stupid people, and the correlation to some outwardly visible feature like the colour of the skin doesn't matter much. I am not a white nationalist, I do not think the Europeans should re-colonise Africa for the natives' own good, etc.
On a micro level it does. For example, I find affirmative action counter-productive. For another example, I don't believe the claims that inner-city schools (read: black) lag behind suburban schools (read: not black) because of lack of funding or because of surrounding poverty. Throwing money at the problem will achieve nothing.
Here's a more serious response.
- Segregating the world, period, based on whatever, is impossible without a coercive power that the existing nations of earth would consider illegal. Before you could forcefully migrate a large percentage of the world's humans you'd have to win a war with whatever portion of the UN stood against you.
- If you could do it, no one would admit to having any values other than those which got to live in/own the nicest places/stuff/be with their family / not be with their competitors/whatever. The technology to determine everyone's values does not exist.
- If you somehow derived everyone's values and split them by these, you would probably be condemning large segments of the population to misery (Lots of people's values are built around living around people who don't share them.), and there would be widespread resentment. The invincible force you used to overcome objection 1 would be tested within a generation.
Without commenting on whether this presentation matches the original metaethics sequence (with which I disagree), this summary argument seems both unsupported and unfalsifiable.
- No evidence is given for the central claim, that humans can and are converging towards a true morality we would all agree about if only we understood more true facts.
- We're told that people in the past disagreed with us about some moral questions, but we know more and so we changed our minds and we are right while they were wrong. But no direct evidence is given for us being more right. The only way to judge who's right in a disagreement seems to be "the one who knows more relevant facts is more right" or "the one who more honestly and deeply considered the question". This does not appear to be an objectively measurable criterion (to say the least).
- The claim that ancients, like Roman soldiers, thought slavery was morally fine because they didn't understand how much slaves suffer is frankly preposterous. Roman soldiers (and poor Roman citizens in general) were often enslaved, and some of them were later freed (or escaped from foreign captivity). Many Romans were freedmen or their descendants - some estimate that by the late Empire, almost all Roman citizens had at least some slave ancestors. And yet somehow these people, who both knew what slavery was like and were often in personal danger of it, did not think it immoral, while white Americans in no danger of enslavement campaigned for abolition.
grumble grumble...
Look, I'm not pro-"Kill All Humans", but I don't think that last step is correct.
Bob can prefer that the human race die off and the earth spin uninhabited forever. It makes him evil, but there's no "logic error" in that, any more than there is in Al's preference that humanity spread out throughout the stars. They both envision future states and take actions that they believe will cause those states.
Is this sort of discrimination not consequential in your view?
I don't know about the study; I have a generic suspicion of social-science studies, especially ones which come to highly convenient conclusions, and hey! they happen to have what's politely called a "replication crisis". I am not interested enough to go read the study and figure out if it's valid, but on my general priors, I believe that people with black names will get fewer callbacks. However, it seems to me that people with names like Pham Ng or Li Xiu Ying will also get fewer callbacks. People certainly have a bias towards those-like-me, but it's not specifically anti-black, it's against anyone who looks/feels/smells different.
can you imagine a scenario in a society where a high IQ group of people was discriminated against to the extent where they couldn't overcome the discrimination, despite their advanced higher IQ?
Sure.
How would the circumstances be different than what blacks have faced in the U.S.?
Um, the IQ would be different? It's not a mystical inner quality that no one can fathom. It's measurable, and on the scale of large groups of people the estimates get pretty accurate.
On the clearly visible level there would be very obvious discrimination -- quotas on admissions to universities, for example. These discriminated-against people would be barred from reaching high positions, but at the level they would be allowed to reach they would be considered very valuable. Even if, for example, such people could not make it into management, managers would try to hire as many of them as possible because they are productive and solve problems.
As to similarities, I was about to write that the discriminated-against will never rise to the highest positions in the society, but oh look! there is that Barack Hussein fellow...
It's socially acceptable to twirl and manipulate small objects in your hands, from pens to stress balls. If you need to get your mouth involved, it's mostly socially acceptable to chew on pens. Former smokers used to hold empty pipes in their mouths, just for comfort, but it's hard to pull off nowadays unless you're old or a full-blown hipster.
Yes. I think that we need a solution that is not only workable but also implementable. If someone creates an 800-page PDF starting with a new set theory, a solution to the Löb's theorem problem, etc., comes to Google with it, and says, "Hi, please switch off all you have and implement this" -- it will not work.
But in 2016 MIRI added a line of research on machine learning.
Link: http://www.vhemt.org/
It's very likely much bigger than 9800. It is also very balanced and laid-back in its views and methods. I'd think that contributes.
We live in an increasingly globalised world, where moving between countries is both easier in terms of transport costs and more socially acceptable. Once translation reaches near-human levels, language barriers will be far less of a problem. I'm wondering to what extent evaporative cooling might happen to countries, both in terms of values and economically.
I read that France and Greece lost 3% and 5% of their millionaires last year (or possibly the year before), citing economic depression and rising racial/religious tension, with the most popular destination being Australia (as it has the 1st or 2nd highest HDI in the world). 3-5% may not seem like a lot, but if it were sustained for several years it would quickly pile up. The feedback effects are obvious: the wealthier members of society find it easier to leave and perhaps have more of a motive to flee an economic collapse, which decreases tax revenue, which deepens the collapse, etc. On the flip side, Australia attracts these people and its economy grows, making it even more attractive...
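The compounding claim is easy to check with a minimal sketch, assuming a hypothetical sustained 4% annual outflow (the rate is illustrative, not taken from the figures above):

```python
# Sketch: a fixed fraction leaving each year compounds geometrically.
rate = 0.04      # hypothetical sustained annual outflow of millionaires
remaining = 1.0  # fraction of the original group still present
for year in range(10):
    remaining *= 1 - rate
print(round(remaining, 3))  # fraction left after a decade: 0.665
```

So a rate in the range the article reports, if sustained, would remove roughly a third of the group within ten years, before counting any feedback effects.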
Socially, the same effect as described in EY's essay I linked happens on a national scale - if the 'blue' people leave, the country becomes 'greener' which attracts more greens and forces out more blues. And social/economic factors feed into each other too - economic collapses cause extremism of all sorts, while I imagine a wealthy society attracting elites would be more able to handle or avoid conflicts.
Now, this is not automatically a bad thing, or at least it might be bad locally for some people, but perhaps not globally. Any thoughts as to what sort of outcomes there might be? And incidentally, how many people can you fit in Australia? I know it's very big, but it also has a lot of desert.
I would be very interested in this as well. In the meantime, there is a subreddit for the site that has a thread with best posts for a new reader, and a thread on people's favorite things from TLP.
The argument that I was making or, maybe, just implying is a version of the argument for deontological ethics. It rests on two lemmas: (1) You will make mistakes; (2) No one is a villain in his own story.
To unroll a bit, people who do large-scale evil do not go home to stroke a white cat and cackle at their own evilness. They think they are the good guys and that they do what's necessary to achieve their good goals. We think they're wrong, but that's an outside view. As has been pointed out, the road to hell is never in need of repair.
Given this, it's useful to have firebreaks, boundaries which serve to stop really determined people who think they're doing good from doing too much evil. A major firebreak is emotional empathy -- it serves as a check on runaway optimization processes which are, of course, subject to the Law of Unintended Consequences.
And, besides, I like humans more than I like optimization algorithms :-P
In the Sequences, Eliezer made a strong case for the realist interpretation of QM (neo-Everettian many-worlds), based on decoherence and Occam's razor. He then, at another point in the Sequences, tied that problem to interesting questions about anthropic probability (the infamous anthropic trilemma), and that cemented MWI as the preferred way to think about QM here.
On the other hand, I think we are still missing the big picture about quantum mechanics: ER = EPR, categorical quantum mechanics, QBism, etc. all point us to interesting unexplored directions.
The article's goal is, more or less, to avoid being convinced of untrue things by motivated agents.
I think the article is trying to help groups set up discourse norms that help people find the truth. (The update uses the phrase "socioepistemic virtue".) It's not so much about helping individuals defend against other individuals, as about helping groups defend their members against bad agents.
Actually, Big Pharma would LOSE billions if it works. There are only a few anti-virals, and none of them work well, and most need to be used in combinations.
No. Gilead manages to charge its $1,000 per pill for an antiviral. If DRACO works for all viruses, it could also be sold at a similar price for a bunch of conditions like AIDS.
You could argue that Gilead isn't really Big Pharma but biotech, but it still shows that there are companies that have no problem with bringing cures to market. Gilead also makes a lot of money.
The company that brought to market a working drug that cures diseases like AIDS would make a lot of money, even if a few other companies might lose billions from it.
An extra data point. If we crash and burn, then earth will be too hot for multicellular life by the time the coal and oil are replenished. So the one and only industrial revolution has happened.
And given ~4,000m years of life so far and the heating only a few hundred million years away, we only just made it. Which suggests it is pretty hard to build intelligent life. Maybe because computation is very expensive so the gradient is steep. Robin Hanson has a paper on this point.
Someone did an article about creating a Kickstarter that actually issued shares in a company if it went over big.
If it gave a tax deduction if it failed, but allowed for a gain if it succeeded, then it might be a way to fund projects that were popular with people but not attractive to Big Pharma or VCs.
You could even have "Hackerspaces" that brought together teams just to do projects. If they included housing, it would be a great way to give postdocs some work, and some visibility while they wait to get into a static lab.
An excellent post, but not Scott :)
The White House also released a PDF with concrete recommendations: http://barnoldlaw.blogspot.ru/2016/10/intelligence.html
Some interesting lines:
Recommendation 13: The Federal government should prioritize basic and long-term AI research. The Nation as a whole would benefit from a steady increase in Federal and private-sector AI R&D, with a particular emphasis on basic research and long-term, high-risk research initiatives. Because basic and long-term research especially are areas where the private sector is not likely to invest, Federal investments will be important for R&D in these areas.
Recommendation 18: Schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science.
I'm getting really sick of this claim that Eliezer says all humans would agree on some morality under extrapolation. That claim is how we get garbage like this. At no point do I recall Eliezer saying psychopaths would definitely become moral under extrapolation. He did speculate about them possibly accepting modification. But the paper linked here repeatedly talks about ways to deal with disagreements which persist under extrapolation:
In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted. (emphasis added)
Coherence is not a simple question of a majority vote. Coherence will reflect the balance, concentration, and strength of individual volitions. A minor, muddled preference of 60% of humanity might be countered by a strong, unmuddled preference of 10% of humanity. The variables are quantitative, not qualitative.
(Naturally, Eugine Nier as "seer" downvoted all of my comments.)
The metaethics sequence does say IMNSHO that most humans' extrapolated volitions (maybe 95%) would converge on a cluster of goals which include moral ones. It furthermore suggests that this would apply to the Romans if we chose the 'right' method of extrapolation, though here my understanding gets hazier. In any case, the preferences that we would loosely call 'moral' today, and that also survive some workable extrapolation, are what I seem to mean by "morality".
One point about the ancient world: the Bhagavad Gita, produced by a warrior culture though seemingly not by the warrior caste, tells a story of the hero Arjuna refusing to fight until his friend Krishna convinces him. Arjuna doesn't change his mind simply because of arguments about duty. In the climax, Krishna assumes his true form as a god of death with infinitely many heads and jaws, saying, 'I will eat all of these people regardless of what you do. The only deed you can truly accomplish is to follow your warrior duty or dharma.' This view seems plainly environment-dependent.
How does low IQ directly cause crime?
See any criminology textbook. Low IQ is a strong predictor of criminal behavior.
Why? This is more speculative.
Inability to foresee consequences of actions.
Opportunity cost is lower - if you have a good chance to enjoy a good income through talent and hard work, then the alternative is less appealing.
Low IQ people are more likely to be at Kegan Development Level 2, which impairs empathy.
But, if we cut to what I believe is the heart of your point, then yes, she absolutely should. Let's scale the problem up for a moment. Say instead of 5 it's 500. Or 5 million. Or the entire rest of humanity aside from the mother and her baby. At what point does sacrificing her child become the right decision? Really, this boils down to the idea of shut up and multiply.
Never, in my opinion. Put every other human being on the tracks (excluding other close family members to keep this from being a Sophie's choice "would you rather..." game). The mother should still act to protect her child. I'm not joking.
You can rationalize this post facto by valuing the kind of society where mothers are ready to sacrifice their kids, and are indeed encouraged to save another life, vs. the world where mothers simply always protect their kids no matter what.
But I don't think this is necessary -- you don't need to validate it on utilitarian grounds. Rather it is perfectly okay for one person to value some lives more than others. We shouldn't want to change this, IMHO. And I think the OP's question about donating 100% to charity, at the detriment of themselves, is symptomatic of the problems that arise from utilitarian thinking. After all if OP was not having internal conflict between internal morals and supposedly rational utilitarian thinking, he wouldn't have asked the question...
If there truly are meaningful genetic differences between races, then so be it. But that seems to be the justification for the portion of "white supremacist" Trump supporters I mentioned above. It's an angry racism that seems likely to be problematic.
Well, as compared to the hypothetical problems this "racism" or "white supremacism" might supposedly cause in the future, the type of "police and all whites are racist" anti-racism you are promoting is having problematic consequences right now, in the form of anti-police and generally anti-white rioting by blacks in places like Ferguson, Baltimore, Charlotte, etc. Not to mention that we'll never solve the problem of the large amount of black-on-black crime if we can't admit its cause.
These views seem very likely to lead to racism.
What do you mean by "racism"? If you mean "the belief that people of different races differ in ability", then yes. Of course, in that case being "racist" is in fact rational.
As Eliezer likes to say "that which can be destroyed by the truth should be".
Hm. These views seem very likely to lead to racism.
I've read Breitbart frequently since Steve Bannon was added to Trump's campaign because I'm fascinated with how Trump (an obvious hustler/fraud/charlatan in my view) has managed to get this close to the Oval Office. It's been illuminating (in a disturbing way) in understanding where I now believe a lot of the Trump support is coming from.
I'm confident a portion of his support is just Red-Team-no-matter-what Repubs. And some are one-issue Pro-Life Christians. And some are fiscal conservatives who are sincerely just concerned about the debt and spending. And some are blue-collar workers in areas (Ohio, Pennsylvania, etc.) where the global economy/technology caused manufacturing to dry up decades ago and they are mad as hell about the facts of the world and will just keep voting to change something, anything, until the day they die...
But there is also this (disturbingly large) element of the movement that think non-white people are less than white people. Like, this group of Trump supporters are literally white supremacists—they believe white people are better suited for civilization. And, of course, no one can say that and politically get away with it in 2016, so they use all sorts of dog whistle-y language to imply it—including the main Trumpian slogan, "Make America Great Again™"
How can this not matter much?
Stupid people are still people. They have rights. Their propensity to make stupid decisions is not sufficient to take away from them the power to make decisions.
Is it because of the IQ difference you believe exists between black and whites?
Yes.
your reaction to those who are?
Is a shrug :-) People have all kinds of political beliefs, I don't find the white nationalists to be extraordinary.
As to re-colonising Africa, see the first paragraph :-)
Technically, you could believe that people are equally allowed to be enslaved.
In a sense, the ancient Romans did believe this. Anyone who ended up in the same situation - either taken as a war captive or unable to pay their debts - was liable to be sold as a slave. So what makes you think your position is objectively better than theirs?
"All men are created equal" emerges from two or more basic principles people are born with. You might say: "Look, you have value, yah? And your loved ones? Would they stop having value if you forgot about them? No? They have value whether or not you know them? How did you conclude they have value? Could that have happened with other people, too? Would you then think they had value? Would they stop having value if you didn't know them? No? Well, you don't know them; do they have value?"
This assumes without argument that "value" is something people intrinsically have or can have. If instead you view value as value-to-someone, i.e. I value my loved ones, but someone else might not value them, then there is no problem.
And it turns out that yes, most people did not have an intuition that anyone has intrinsic value just by virtue of being human. Most people throughout history assigned value only to ingroup members, to the rich and powerful, and to personally valued individuals. The idea that people are intrinsically valuable is historically very new, still in the minority today globally, and for both these reasons doesn't seem like an idea everyone should naturally arrive at if they only try to universalize their intuitions a bit.
it's just that it's unclear to me how seriously we should take them at this stage
Well, categorical quantum mechanics is a program under development since 2008, and it gives you a quantum framework in any computational theory with enough symmetries (databases, linguistics, etc).
It spawned quantum programming languages and a graphical calculus. So I think it's pretty successful and has to be taken seriously, although it's far from being complete (it lacks a unified treatment of infinite systems, for example).
Related:
There's a cheesy children's book called Death is Wrong
Also, in this thread several people recount precisely what events in their lives caused them to become rationalists. Influential books include:
It's not clear to me why many efforts to raise the sanity waterline are focused on adults. It seems like it would be more effective to try to teach children to think more like programmers, engineers, and scientists. I intend to use this sort of thing whenever I owe obligatory Christmas/birthday gifts to satisfy cultural norms.
Hey thanks for this. I had some time and I compiled this chronologically ordered list of links from those threads for personal use. https://my.mixtape.moe/nrbmyr.html
Someone could be against slavery for THEM personally without being against slavery in general if they didn't realize that what was wrong for them was also wrong for others.
Huh? I'm against going to jail personally without being against the idea of jail in general. In any case, wasn't your original argument that ancient Greeks and Romans just didn't understand what it means to be a slave? That clearly does not hold.
most moral theories are so bad you don't even need to talk about evidence. You can show them to be wrong just because they're incoherent or self-contradictory.
Do you mean descriptive or prescriptive moral theories? If descriptive, humans are incoherent and self-contradictory.
Which moral theories do you have in mind? A few examples will help.
Do you think the idea is sufficiently coherent and non-self-contradictory that the way to find out if it's right or wrong is to look for evidence?
You do understand that debates about objective vs. relative morality have been going on for millennia?
They might need to figure out that people are equal, too.
No, they don't if they themselves are in danger of becoming slaves. Notably, a major source of slaves in the Ancient world was defeated armies. Slaves weren't clearly different people (like the blacks were in America), anyone could become a slave if his luck turned out to be really bad.
a bias against those-not-like-me would be sufficient in this case to cause a significant deficit in employment opportunity for blacks in a historically majority-white nation.
Will it? I agree that it will cause some harm, but I'm not sure about "significant". Note that race-based discrimination is explicitly illegal and agencies such as EEOC do prosecute. Moreover, EEOC uses the concept of "disparate impact" which basically means that if you statistically discriminate regardless of your intent, you are in trouble.
Also, did a bias against those-not-like-me cause employment problems for, say, the Chinese? Why not?
You are saying black Americans have a genetic deficit in the form of lower average IQ.
I am saying people with African ancestry (regardless of their citizenship) belong to a gene pool which has average IQ lower than that of people with European ancestry. Lest you think that the whites are the pinnacle of evolution, the European gene pool has lower average IQ than, say, Han Chinese.
I don't know if "deficit" is a useful word -- there is no natural baseline and the fact that the IQ scale has the average IQ of Europeans as the "norm" (100) is just a historical accident. I think it's more correct to just say that different gene pools have different IQ distributions.
There are two separable questions here. The first one is do you agree that people with African ancestry have lower average IQ (by about one standard deviation) than people with European ancestry? That question has nothing to do with slavery and segregation. If you do not, we hit a major disagreement right here and there's not much point in discussing why contemporary black Americans have different outcomes than whites or Asians. If you do, we can move on to the second question: what is the relative role of various factors which determine the current state of the black Americans?
I might suggest the following approach. If you agree that the average IQ of blacks is lower, then let's estimate the effect of that on social outcomes. It might be that this cause will explain a great deal of what we observe. If so, there's no need to bring in the history of slavery and segregation as a major factor because there wouldn't be much left to explain.
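To make the tail arithmetic concrete, here is a minimal sketch of the kind of estimate being proposed. It assumes both groups' scores are normally distributed with SD 15 and that the means differ by the one standard deviation claimed above; the specific means (100 vs. 85) and the cutoff of 130 are arbitrary illustrative choices, not figures from this thread.

```python
from math import erf, sqrt

def frac_above(cutoff, mean, sd=15.0):
    """Fraction of a normal(mean, sd) population scoring above `cutoff`."""
    z = (cutoff - mean) / (sd * sqrt(2.0))
    return 0.5 * (1.0 - erf(z))  # survival function of the normal CDF

# Illustrative means only: 100 vs. 85 encodes a one-SD gap.
high = frac_above(130, 100)   # ~2.3% of the mean-100 group
low = frac_above(130, 85)     # ~0.13% of the mean-85 group
print(high, low, high / low)  # the tail ratio is large (~17x)
```

The point of the sketch: a one-SD difference in means produces only modest differences near the middle of the distributions, but order-of-magnitude differences in the tails, so outcomes that depend on tail membership can diverge sharply even when the distributions mostly overlap.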
I'd hypothesize slavery/segregation/discrimination has been consequential to the extent that even if blacks had a higher average IQ than whites, they would still be in a similar situation.
Ashkenazi Jews have higher average IQ than whites and were segregated and discriminated against. Are they in a similar situation? Were they in a similar situation at the time when the segregation was just ending?
Besides, you're forgetting that one can just go and measure IQ. There is a lot of data on the average IQ of racial groups in the US. Hint: American blacks do not have higher IQ.
Plainly, advanced IQ (or other genetic advantages) aren't enough to overcome significant discrimination in all cases.
Yes, but we're not talking about "all cases". We are talking about the very specific case of the United States of America.
Things can change. Slowly.
Um, things have changed. Already.