Comment author: Matthew_Opitz 23 April 2016 04:09:11PM *  4 points [-]

There are also some examples of anti-sleepwalk bias:
1. World War I. The crisis unfolded over more than a month. Surely the diplomats will work something out, right? Nope.
2. Germany's invasion of the Soviet Union in World War II. Surely some of Hitler's generals will speak up and talk Hitler out of this crazy plan while Germany has not even finished the first part of the war, against Britain. Surely Germany would not willingly put itself into another two-front war after many of its generals had explicitly concluded that Germany must never get involved in a two-front war ever again. Right? Nope.
3. The sinking of the Titanic. Surely, with over two and a half hours between the iceberg impact and the ship finishing sinking, SURELY there would be enough time to get all of the lifeboats safely and calmly loaded up to near max capacity, right? NOPE. And going even further back to the decision not to put enough lifeboats on in the first place... SURELY the White Star Line must have had a good reason for this. SURELY this means the ship really is unsinkable, right? NOPE.
4. The 2008 financial crisis. SURELY the monetary authorities have solved the problem of preventing recessions and smoothing out the business cycle. So SURELY I as a private trader can afford to be as reckless as I want and not have to worry about systemic risk, etc.

In response to comment by gjm on Suppose HBD is True
Comment author: OrphanWilde 21 April 2016 05:51:37PM -1 points [-]

Further: let's suppose, at least for the sake of argument, that you're very nearly right, that in our hypothetical HBD-is-right world you get scarcely any extra useful information from a person's race once you've looked at a few other equally trivial characteristics.

The issue isn't that there isn't extra useful information, the issue is that we're pretty terrible at quickly processing variable dependence to arrive at correct answers, where rapid processing is part of the situation in consideration.

In that kind of situation, clothing alone will tell you more than clothing plus race - not because you couldn't arrive at a better answer given more information, but because the additional information is almost certainly going to be overweighted by virtue of the brain not having a good intuitive handle on either dependent variables or small numbers.
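(A toy numerical sketch of that point, with every number invented purely for illustration: when two cues are perfectly correlated, the second cue adds no information once the first is known, but the intuitive "treat them as independent" update double-counts the shared evidence and inflates the posterior.)

```python
# Toy model: two perfectly correlated binary cues (say, "shabby clothing"
# and a second cue that is redundant once clothing is known).
# All numbers are invented for illustration.

prior_odds = 0.1 / 0.9      # prior P = 10% before seeing any cues
lr_clothing = 0.8 / 0.2     # likelihood ratio contributed by the clothing cue

# Correct update: the second cue is redundant, so it contributes nothing extra.
correct_odds = prior_odds * lr_clothing

# Intuitive-but-wrong update: treat the cues as independent and
# multiply the same likelihood ratio in twice (double-counting).
naive_odds = prior_odds * lr_clothing * lr_clothing

def odds_to_prob(o):
    return o / (1 + o)

print(f"correct posterior: {odds_to_prob(correct_odds):.2f}")  # 0.31
print(f"naive posterior:   {odds_to_prob(naive_odds):.2f}")    # 0.64
```

The gap between 0.31 and 0.64 is exactly the overweighting described above: the brain multiplies in dependent evidence as though it were fresh.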

Comment author: Matthew_Opitz 22 April 2016 02:54:51PM 1 point [-]

I don't know...would clothing alone tell you more than clothing plus race? I think we would need to test this.

Is a poorly-dressed Irish-American (or at least, someone who looks Irish-American, with bright red hair and pale white skin) as statistically likely to mug someone in a given situation (deserted street at night, etc.) as a poorly-dressed African-American? For reasons of political correctness, I would not like to share my presuppositions.

I will say, however, that, in certain historical contexts (1840s, for example), my money would have been on the Irish-American being more likely to mug me, and I would have taken more precautionary measures to avoid those Irish parts of town, whereas I would have expected the neighborhoods inhabited by free blacks to have been relatively safe.

Nowadays, I don't know what the statistics would be if you measured crimes perpetrated by certain races, adjusted for socio-economic category (in other words, comparing poor to poor, or wealthy to wealthy, in each group). But many people would probably have their suspicions. So, can we test these intuitions to see if they are just bigoted racism, or if they unfortunately happen to be accurate generalizations?

Comment author: TheAncientGeek 22 April 2016 11:36:49AM *  0 points [-]

Are US employers forbidden from setting all merit-based tests, or just IQ tests?

Because task-specific tests aren't just an alternative to IQ tests, they're a better alternative in almost every case.

Comment author: Matthew_Opitz 22 April 2016 02:32:49PM 0 points [-]

True in many cases, although for some jobs the task might not be well-specified in advance (such as in some cutting-edge tech jobs), and what you need are not necessarily people with any particular domain-specific skills, but rather just people who are good all-around adaptable thinkers and learners.

Comment author: knb 20 April 2016 12:30:28AM 2 points [-]

Looks like Andrea Rossi's E-Cat cold fusion scam is finally reaching its end-phase. Some previous LW discussion here, here and here.

Comment author: Matthew_Opitz 21 April 2016 10:57:33PM 2 points [-]

Yeah, what a hoot it has been watching this whole debacle slowly unfold! Someone should really write a long retrospective on the E-Cat controversy as a case-study in applying rationality to assess claims.

My priors about Andrea Rossi's claims were informed by things such as: 1. He has been convicted of fraud before. (Strongly negative factor) 2. The idea of this type of cold fusion has been deemed by most scientists to be far-fetched. (Weakly negative factor. Nobody has claimed that physics is a solved domain, and I'm always open to new ideas...)

From there, I updated on the following evidence: 1. Rossi received apparent lukewarm endorsement from several professional scientists. (Weakly positive factor. Still didn't mean a whole lot.) 2. Rossi dragged his feet on doing a clear, transparent, independently-conducted calorimetric test of his device—something that many people were willing to do for him, and which is not rocket science to perform. (Strongly negative factor—strongly pattern-matches with a fraudster). 3. Rossi claimed to have received independent contracts for licensing his device. First Defkalion in Greece, then Industrial Heat. Rossi also made various claims about NASA and Texas Instruments being involved. When investigated, the claims about the reputable organizations being involved turned out to be exaggerations, and the other partners were either of unknown reputation (Defkalion) that quickly disappeared, or had close ties to Rossi himself. Still no independent validation. (Strongly negative factor).
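The updating story above can be caricatured as log-odds accumulation. A minimal sketch, with the weight assigned to each factor invented purely for illustration (the point is the bookkeeping, not the particular numbers):

```python
import math

# Invented log-likelihood-ratio weights, one per piece of evidence.
# Signs follow the "strongly/weakly negative/positive" labels above;
# the magnitudes are purely illustrative.
evidence = {
    "prior fraud conviction":           -2.0,  # strongly negative
    "cold fusion deemed far-fetched":   -0.5,  # weakly negative
    "lukewarm scientist endorsements":  +0.5,  # weakly positive
    "foot-dragging on calorimetry":     -2.0,  # strongly negative
    "dubious licensing partners":       -2.0,  # strongly negative
}

prior_log_odds = 0.0  # start agnostic: 50/50 that the claim is genuine

log_odds = prior_log_odds + sum(evidence.values())
posterior = 1 / (1 + math.exp(-log_odds))
print(f"P(E-Cat genuine): {posterior:.3f}")  # prints 0.002
```

Even starting from a charitable 50/50 prior, a few strongly negative factors drive the posterior to well under one percent.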

And now we arrive at the point where even Industrial Heat is breaking ties with Rossi. What a fun show!

In response to comment by sight on Suppose HBD is True
Comment author: OrphanWilde 21 April 2016 07:33:52PM -1 points [-]

It's further back the pipeline than hiring - there just aren't very many black programmers - so trying to solve the problem at the hiring stage is solving the wrong problem.

Comment author: Matthew_Opitz 21 April 2016 10:26:04PM 1 point [-]

That just pushes the question back one step, though: why are there so few black programmers? Lack of encouragement in school (due to racial assumptions that they would not be any good at this stuff anyways)? Lack of stimulation of curiosity in programming in elementary school due to poor funding for electronics in the classroom that has nothing to do with conscious racism per se? (This would be an environmental factor not having to do with conscious racism, but rather instead having to do with inherited lack of socio-economic capital, living in a poor inner city, etc.) Lack of genetic aptitude for these tasks? HBD could be relevant to how we address this problem. Do we mandate racial-sensitivity training courses, increased federal funding for electronics in inner-city schools, and/or genetic modification? Even if we do all three, which should we devote the most funding towards?

In response to comment by sight on Suppose HBD is True
Comment author: OrphanWilde 21 April 2016 07:45:32PM -1 points [-]

So you're saying the social sciences are failing because black people are less intelligent than white people and they can't admit it.

Okay. How would one go about falsifying this belief of yours? What evidence would change your mind?

Comment author: Matthew_Opitz 21 April 2016 10:19:55PM 5 points [-]

One argument could be that many social scientists are being led down a blind alley: they are predisposed to find environmental causes for all sorts of group differences, and so read such causes into their data more strongly than the evidence warrants. That leads to incorrect conclusions and to policy recommendations that will not actually change things for the better, because those recommendations fail to address what would, in this scenario, be the vast majority of the root of the problem (genetics).

In response to Suppose HBD is True
Comment author: Matthew_Opitz 21 April 2016 10:09:28PM *  5 points [-]

Estimating a person's capability to do X, Y, or Z (do a job effectively, be a law-abiding citizen, be a consistently productive citizen not dependent on welfare programs, etc.) based on skin color or geographical origin of their ancestry is a heuristic.

HBD argues that it is a relatively accurate heuristic. The anti-HBD crowd argues that it is an inaccurate heuristic.

OrphanWilde seems to be arguing that, even if HBD is correct that these heuristics are relatively accurate, we don't need heuristics like this in the first place because there are even better heuristics or more direct measurements of a person's individual capability to do X, Y, or Z already out there. (IQ, interviews, etc.)

The HBD advocates here seem to be arguing that we do, in fact, need group-based heuristics because individual heuristics:
*1. Are more costly in terms of time, and are thus just not feasible for many applications.
*2. Don't really exist for certain measures, such as in estimating "probable future law-abidingness" or "probable future welfare dependency".
*3. Have political restrictions on being able to apply them. (For example, we COULD use formal IQ tests on job applicants, but such things have been made illegal precisely because they seem to paint a higher proportion of blacks in a bad light).

Perhaps OrphanWilde might like to respond to these objections. Here's how I would respond:
*1. The costliness of individual judgment is warranted because using group-based heuristics has politically-toxic spillovers, and might miss out on important outliers (by settling on local optima at the expense of global optima). We are not trying to screen out defective widgets from an assembly line (in which case a quick but "lossy" sorting heuristic might be justified). We are trying to sort people. The cost of mis-sorting even a small percentage of individuals (for example, by heuristically rejecting a black man who happens, unbeknownst to us without doing the individual evaluation, to have an IQ of 150) outweighs the cost-savings of quick group-based heuristics: both because it will inevitably anger the black community, with all sorts of politically toxic spillovers, and because we would be passing up a disproportionate goldmine of economic potential by missing these outliers.
*2. If individual tests for probable law-abidingness or probable economic productivity don't currently exist, then maybe we should try to develop them! Is that so impossible? Personally, I find it a bit unbelievable that the U.S. does not currently have tests for certain agreed-upon foundational cultural values as part of its immigration screening process. For example, if applicants had to respond to questions such as, "Explain why impartial fairness towards strangers rather than favoritism towards friends and relatives is an essential aspect of national citizenship and professional behavior" or "Explain the advantages of dis-establishment of religion from the political and legal affairs of the state" then I would sleep much more easily at night about our immigration policy.
*3. Well, perhaps we should campaign to overturn the political restrictions on individual merit-based tests by pointing out that the only de-facto alternative that people will have is to use group-based tests of some sort or another (whether employers or other institutions openly admit to using such group-based heuristics or not, they will find a way to do so), and that group-based heuristics will actually hurt disadvantaged groups even more. In other words, unless you want all appointments in society to be decided by random casting of lots, people need some sort of criteria for judging others. Given this, it would be better to have individual-based tests rather than group-based tests. Even if the individual-based tests will end up showing "disparate impact" on certain groups, it will still be less than if we used group-based tests.

(Edit: formatting improved upon request).

In response to Black box knowledge
Comment author: Matthew_Opitz 05 March 2016 12:08:55AM 2 points [-]

Some of your black box examples seem unproblematic. I agree that all you need to trust that a toaster will toast bread is an induction from repeated observation that bread goes in and toast comes out.

(Although, if the toaster is truly a black box about which we know absolutely NOTHING, then how can we induce that the toaster will not suddenly start shooting out popsicles or little green leprechauns when the year 2017 arrives? In reality, a toaster is nothing close to a black box. It is more like a gray box. Even if you think you know nothing about how a toaster works, you really do know quite a bit about how a toaster works by virtue of being a reasonably intelligent adult who understands a little bit about general physics--enough to know that a toaster is never going to start shooting out leprechauns. In fact, I would wager that there are very few true "black boxes" in the world--but rather, many gray boxes of varying shades of gray).

However, the tax accountant and the car mechanic seem to be even more problematic as examples of black boxes because there is intelligent agency behind them--agency that can analyze YOUR source code, determine the extent to which you think those things are a black box, and adjust their output accordingly. For example, how do you know that your car will be fixed if you bring it to the mechanic? If the mechanic knows that you consider automotive repair to be a complete black box, the mechanic could have an incentive to purposefully screw up the alignment or the transmission or something that would necessitate more repairs in the future, and you would have no way of telling where those problems came from. Or, the car mechanic could just lie about how much the repairs would cost, and how would you know any better? Ditto with the tax accountant.

The tax accountant and the car mechanic are a bit like AIs...except AIs would presumably be much more capable of scanning our source code and taking advantage of our treating them as black boxes.

Here's another metaphor: in my mind, the problem of humanity confronting AI is a bit like the problem that a mentally-retarded billionaire would face.

Imagine that you are a mentally-retarded person with the mind of a two-year-old who has suddenly just come into possession of a billion dollars in a society where there is no state or higher authority to regulate or enforce any sort of morality or make sure that things are "fair." How are you going to ensure that your money will be managed in your interest? How can you keep your money from being outright stolen from you?

I would assert that there would be, in fact, no way at all for you to have your money employed in your interest. Consider:

*Do you hire a money manager (a financial advisor, a bank, a CEO...any sort of money manager)? What would keep this money manager from taking all of your money and running away with it? (Remember, there is no higher authority to punish this money manager in this scenario.) If you were as smart as or smarter than the money manager, you could probably track down this money manager and take your money back. But you are not as smart as the money manager. You are a mentally-retarded person with the mind of a toddler. And in the case where you did happen to be as smart as the money manager, the money manager would be redundant in the first place. You would just manage your own money.

*Do you try to manage your money on your own? Remember, you have the mind of a two-year-old. The best you can do is stumble around on the floor and say "Goo-goo-gah-gah." What are you going to be able to do with a billion dollars?

Neither solution in this metaphor is satisfactory.

In this metaphor:
*The two-year-old billionaire is humanity.
*The lack of a higher authority symbolizes the absence of a God to punish an AI.
*The money manager is like AI.

If an AI is a black box, then you are screwed. If an AI is not a black box, then what do you need the AI for?

Humans only work as black-boxes (or rather, gray-boxes) because we have an instinctual desire to be altruistic to other humans. We don't take advantage of each other. (And this does not apply equally to all people. Sociopaths and tribalistic people would happily take advantage of strangers. And I would allege that a world civilization made up of entirely these types of people would be deeply dysfunctional).

So, here's how we might keep an AI from becoming a total black-box, while still allowing it to do useful work:

*Let it run for a minute in a room unconnected to the Internet.
*Afterwards, hire a hundred million programmers to trace out exactly what the AI was doing in that minute by looking at a readout of the most base-level code of the AI.

To any one of these programmers, the rest of the AI that does not happen to be that programmer's special area of expertise will seem like a black box. But, through communication, humanity could pool their specialized investigations into each part of the AI's running code and sketch out an overall picture of whether its computations were on a friendly trajectory or not.

Comment author: Val 04 March 2016 06:18:01PM 0 points [-]

Let's assume such an AI could be created perfectly.

Wouldn't there be a danger of freezing human values forever to the values of the society which created it?

Imagine somehow the Victorian people (using steampunk or whatever) managed to build such an AI, and that AI would forever enforce their values. Would you be happy with every single value it enforced?

Comment author: Matthew_Opitz 04 March 2016 07:49:17PM 3 points [-]

I don't want to speak for the original author, but I imagine that presumably the AI would take into account that the Victorian society's culture was changing based on its interactions with the AI, and that the AI would try to safeguard the new, updated values...until such a time as those new values became obsolete as well.

In other words, it sounds like under this scheme the AI's conception of human values would not be hardcoded. Instead, it would observe our affect to see what sorts of new activities had become terminal goals in their own right (activities we were intrinsically happy to participate in), and the AI would adapt to this change in human culture to facilitate those new activities.

That said, I'm still unsure how one could guarantee that the AI would not hack its own "human affect detector" to make its job easy, forcing smiles onto everyone's faces under torture and defining torture as the preferred human activity.

Comment author: Matthew_Opitz 03 March 2016 07:27:05PM 1 point [-]

Okay, so let's use some concrete examples to see if I understand this abstract correctly.

You say that the chain of causation is from fitness (natural selection) ---> outcomes ---> activities

So, for example: reproduction ---> sex ---> flirting/dancing/tattooing/money/bodybuilding.

Natural selection programs us to have a terminal goal of reproduction. HOWEVER, it would be a bad idea for an AI to conclude, "OK, humans want reproduction? I'll give them reproduction. I'll help the humans reproduce 10 quadrillion people. The more reproduction, the better, right?"

The AI would need to look ahead and see, "OK, the programmed goal of reproduction has caused humans to prefer a specific outcome, sex, which tended to lead to reproduction in the original (ancestral) programming environment, but might no longer do so. Humans have, in other words, come to cherish sex as a terminal goal in its own right through their affective responses to its reward payoff. So, let's make sure that humans can have as much sex as possible, regardless of whether it will really lead to more reproduction. That will make humans happy, right?"

But then the AI would need to look ahead one step further and see, "OK, the preferred outcome of sex has, in turn, caused humans to enjoy, for their own sake, specific activities that, in the experience and learning of particular humans in their singular lifetimes (we are no longer talking about instinctual programming here, but rather culture), have tended, in their particular circumstances, to lead to this preferred outcome of sex. In one culture, humans found that flirting tended to lead to sex, and so they formed a positive affective connotation with flirting and came to view flirting as a terminal goal in its own right. In another culture, dancing appeared to be the key to sex, and so dancing became a terminal goal in that culture. In other cultures, bodybuilding, accumulation of money, etc. seemed to lead to sex, and so humans became attached to those activities for their own sake, even beyond the extent to which those activities continued to lead to more sex. So really, the way to make these humans happy would be to pay attention to their particular cultures and psychologies and see which activities they have come to develop a positive affective bond with...because THESE activities have become the humans' new conscious terminal goals. So we AI robots should work hard to make it easy for the humans to engage in as much flirting/dancing/bodybuilding/money accumulation/etc. as possible."

Would this be an accurate example of what you are talking about?
