
[Link] Stuart Ritchie reviews Keith Stanovich's book "The Rationality Quotient: Toward a Test of Rational Thinking"

4 Stefan_Schubert 11 January 2017 11:51AM
Comment author: Stefan_Schubert 05 October 2016 05:01:23PM *  1 point [-]

As Bastian Stern has pointed out to me, people often mix up pro tanto considerations with all-things-considered judgements - usually by interpreting what is merely intended to be a pro tanto consideration as an all-things-considered judgement. Is there a name for this fallacy? It seems both dangerous and common, so it should have a name.

Social effects of algorithms that accurately identify human behaviour and traits

1 Stefan_Schubert 14 May 2016 10:48AM

Related to: Could auto-generated troll scores reduce Twitter and Facebook harassments?, Do we underuse the genetic heuristic? and Book review of The Reputation Society (part I, part II).


Today, algorithms can accurately identify personality traits and levels of competence from computer-observable data. FiveLabs and YouAreWhatYouLike, for instance, are able to reliably identify your personality traits from what you've written and liked on Facebook. Similarly, it's now possible for algorithms to fairly accurately identify how empathetic counselors and therapists are, and to identify online trolls. Automatic grading of essays is getting increasingly sophisticated. Recruiters rely to an increasing extent on algorithms, which, for instance, predict levels of job retention among low-skilled workers better than human recruiters do.

These sorts of algorithms will no doubt become more accurate, and cheaper to train, in the future. With improved speech recognition, it will presumably become possible to assess both IQ and personality traits by letting your device overhear longer conversations. This could be extremely useful to, e.g., intelligence services or recruiters.

Because such algorithms could identify competent and benevolent people, they could provide a means to better social decisions. An alternative route to better decisions is to identify, e.g., factual claims as true or false, or arguments as valid or invalid. Numerous companies are working on such issues, with some measure of success, but especially when it comes to more complex and theoretical facts or arguments, this seems quite hard. It seems to me unlikely that we will have algorithms that are able to point out subtle fallacies anytime soon. By comparison, it would be much easier for algorithms to assess people's IQ or personality traits by looking at superficial features of word use and other readily observable behaviour. As we have seen, algorithms are already able to do that to some extent, and significant improvements in the near future seem possible.
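To make this concrete, here is a minimal sketch of the kind of word-use-based assessment described above, built from off-the-shelf machine learning tools. It is only an illustration: the texts, the scores and the choice of an "empathy" measure are all invented, and the real systems mentioned above presumably use far larger datasets and more sophisticated models.

```python
# Minimal sketch (illustration only): mapping word-use features to a trait score.
# Assumes you already have texts paired with trait scores (e.g. from questionnaires);
# the toy data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical training data: short texts with an invented "empathy" score (0-100).
texts = [
    "I completely understand how hard that must have been for you.",
    "Stop complaining and get back to work.",
    "Tell me more about how you felt when it happened.",
    "That is not my problem.",
]
scores = [85, 20, 90, 15]

# TF-IDF turns word use into numeric features; ridge regression maps them to a score.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(texts, scores)

print(model.predict(["I hear you, that sounds really difficult."]))
```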

Thus, rather than improving our social decisions by letting algorithms adjudicate the object-level claims and arguments, we could instead use them to give reliable ad hominem arguments against the participants in the debate. To wit, rather than letting our algorithms show that a certain politician's claims are false and that his arguments are invalid, we let them point out that he is less than brilliant and has sociopathic tendencies. The latter seems to me significantly easier (even though it will by no means be easy: it might take a long time before we have such algorithms).

Now, for these algorithms to lead to better social decisions, it is of course not enough that they are accurate: they must also be perceived as such by the relevant decision-makers. In recruiting and the intelligence services, it seems likely that they increasingly will be, even though there will of course be some resistance. The resistance will probably be higher among voters, many of whom might prefer their own judgement of politicians to deferring to an algorithm. However, if the algorithms were sufficiently accurate, it seems unlikely that they wouldn't have profound effects on election results. Whoever the algorithms favoured would scream the results from the rooftops, and it seems likely that this would affect undecided voters.

Besides better political decisions, these algorithms could also lead to more competent rule in other areas of society. This might affect, e.g., GDP and the rate of progress.

What would be the impact on existential risk? It seems likely to me that if these algorithms led to the rule of the competent and the benevolent, that would lead to more efforts to reduce existential risks, to more co-operation in the world, and to better rule in general, and that all of these factors would reduce existential risk. However, there might also be countervailing considerations. These technologies could have a large impact on society, and lead to chains of events which are very hard to predict. My initial hunch is that they would mostly play a positive role for X-risk, however.

Could these technologies be held back for privacy reasons? It seems that secret use of these technologies to assess someone during everyday conversation could potentially be outlawed. It seems to me far less likely that it would be prohibited to use them to assess, e.g., a politician's intelligence, trustworthiness and benevolence. However, these things, too, are hard to predict.

Comment author: RyanCarey 11 May 2016 03:00:17AM 2 points [-]

If you get a well-labelled dataset, I think this is pretty thoroughly within the scope of current machine learning technologies, but that means spending perhaps hundreds of hours labelling papers with a postmodernism score out of 100. If you're trying to single out the postmodernism that you're convinced is total BS, then that's more complex. It's doable, but you'd need to make the case to me for why it would be worthwhile, and what exactly your aim would be.

Comment author: Stefan_Schubert 11 May 2016 01:39:09PM 0 points [-]

Thanks Ryan, that's helpful. Yes, I'm not sure one would be able to do something that has the right combination of accuracy, interestingness and low cost at present.

Comment author: RyanCarey 10 May 2016 04:21:05PM 0 points [-]

If you had a million labelled postmodern and non-postmodern papers, you could decently identify them.

You could categorise most papers with fewer labels using citation graphs.

You could recommend papers with a recommender system (using ratings), much as Amazon recommends books.

There are hundreds of ways to apply machine learning to academic articles; it's a matter of deciding what you want the machine learning to do.
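As a rough illustration of the first of these suggestions (my sketch, not part of the original comment), a standard supervised text classifier could look something like the following. The example "papers" and labels are invented; a real classifier would need the large labelled corpus discussed above.

```python
# Minimal sketch (illustration only): classifying papers as "postmodern" or not,
# assuming a labelled corpus exists. The toy data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

papers = [
    "The hegemonic discourse inscribes itself upon the body of the subaltern subject.",
    "We estimate the regression model using ordinary least squares on panel data.",
    "Meaning is endlessly deferred through the play of signifiers.",
    "The experiment used a randomised controlled design with 240 participants.",
]
labels = [1, 0, 1, 0]  # 1 = postmodern, 0 = not postmodern (toy labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(papers, labels)

# Probability that a new abstract is "postmodern" under this toy model.
new_abstract = ["Power relations are reproduced through discursive practices."]
print(clf.predict_proba(new_abstract)[0, 1])
```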

Comment author: Stefan_Schubert 10 May 2016 05:08:53PM 0 points [-]

Sure, I guess my question was whether you think it'd be possible to do this in a way that would resonate with readers. Would they find the estimates of quality, or of level of postmodernism, intuitively plausible?

My hunch was that the classification would primarily be based on patterns of word use, but you're right that it would probably be fruitful to look at patterns of citation as well.

Comment author: Stefan_Schubert 10 May 2016 10:26:20AM *  3 points [-]

deleted

Comment author: gjm 01 May 2016 07:32:07PM 0 points [-]

Asking scientists to keep their paper titles hedge-drift-resistant means (1) asking each individual scientist to do something that will reduce the visibility of their work relative to others', for the sake of a global benefit -- a class of policy that for obvious reasons doesn't have a great track record -- and (2) asking them to give their papers titles that are boring and wordy.

I agree that the world might be a better place if scientists consistently did this. But it doesn't seem very likely to happen.

(Also, here's what might happen if they almost consistently did this: the better, more conscientious scientists all write carefully hedged articles with carefully hedged titles, and journalists ignore all of them because they all sound like "Correlational analysis of OCEAN traits weakly suggest slight association between conscientiousness and Y-chromosome haplogroup O3". A few less careful scientists write lower-quality papers that, among other things, have titles like "The Chinese work harder: correlational analysis of OCEAN traits and genotype", and those are the ones that the journalists pick up. These are also the ones without the careful hedging in the actual analysis, without serious attempts to correct for multiple correlations, etc. So we end up with worse stuff in the press.)

Comment author: Stefan_Schubert 02 May 2016 09:55:06AM *  0 points [-]

Good points. I agree that what you write within parentheses is a potential problem. Indeed, it is a problem for many kinds of far-reaching norms on altruistic behaviour, compliance with which is hard to observe: they might handicap conscientious people relative to less conscientious people to such an extent that the norms do more harm than good.

I also agree that individualistic solutions to collective problems have a chequered record. The point of 1)-3) was rather to indicate how you could potentially reduce hedge drift, given that you want to do that. Getting scientists and others to want to reduce hedge drift is probably a harder problem.

In conversation, Ben Levinstein suggested that it is partly the editors' role to frame articles in such a way that hedge drift doesn't occur. There is something to that, though editors, of course, often have incentives to encourage hedge drift as well.

Hedge drift and advanced motte-and-bailey

22 Stefan_Schubert 01 May 2016 02:45PM

Motte and bailey is a technique by which one protects an interesting but hard-to-defend view by making it similar to a less interesting but more defensible position. Whenever the more interesting position - the bailey - is attacked, one retreats to the more defensible one - the motte - but when the attackers are gone, one expands again to the bailey.

In that case, one and the same person switches between two interpretations of the original claim. Here, I rather want to focus on situations where different people make different interpretations of the original claim. The originator of the claim adds a number of caveats and hedges to their claim, which makes it more defensible, but less striking and sometimes also less interesting.* When others refer to the same claim, however, the caveats and hedges gradually disappear, making it more and more bailey-like.

A salient example of this is that scientific claims (particularly in messy fields like psychology and economics) often come with a number of caveats and hedges, which tend to get lost when the claims are retold. This is especially so when the media write about these claims, but even other scientists often fail to properly transmit all the hedges and caveats that come with them.

Since this happens over and over again, people probably do expect their hedges to drift to some extent. Indeed, it would not surprise me if some people actually want hedge drift to occur. Such a strategy amounts to a more effective, because less observable, version of the motte-and-bailey strategy. Rather than switching back and forth between the motte and the bailey - something which is at least moderately observable, and also usually relies on some amount of vagueness, which is undesirable - you let others spread the bailey version of your claim, whilst you sit safe in the motte. This way, you get what you want - the spread of the bailey version - in a much safer way.

Even when people don't use this strategy intentionally, you could argue that they should expect hedge drift, and that omitting to take action against it is, if not outright intellectually dishonest, then at least approaching that. This argument would rest on the consequentialist notion that if you have strong reasons to believe that some negative event will occur, and you could prevent it from happening by fairly simple means, then you have an obligation to do so. I certainly do think that scientists should do more to prevent their views from being garbled via hedge drift.

Another way of expressing all this is by saying that when including hedging or caveats, scientists often seem to seek plausible deniability ("I included these hedges; it's not my fault if they were misinterpreted"). They don't actually try to prevent their claims from being misunderstood. 

What concrete steps could one then take to prevent hedge drift? Here are some suggestions. I am sure there are many more.

  1. Many authors use eye-catching, hedge-free titles and/or abstracts, and then only include hedges in the paper itself. This is a recipe for hedge drift and should be avoided.
  2. Make abundantly clear, preferably in the abstract, just how dependent the conclusions are on key assumptions. Say this not in a way that enables you to claim plausible deniability in case someone misinterprets you, but in a way that actually reduces the risk of hedge drift as much as possible.
  3. Explicitly caution against hedge drift, using that term or a similar one, in the abstract of the paper.

* Edited 2/5 2016. By hedges and caveats I mean terms like "somewhat" ("x reduces y somewhat"), "slightly", etc., as well as modelling assumptions without which the conclusions don't follow, and qualifications regarding domains in which the thesis doesn't hold.

Comment author: James_Miller 22 April 2016 07:18:56PM *  2 points [-]

Related: Scott Adams' Law of Slow Moving Disasters

"whenever humanity can see a slow-moving disaster coming, we find a way to avoid it. Let’s run through some examples:

Thomas Malthus famously predicted that the world would run out of food as the population grew. Instead, humans improved their farming technology.

When I was a kid, it was generally assumed that the world would be destroyed by a global nuclear war. The world has been close to nuclear disaster a few times, but so far we’ve avoided all-out nuclear war.

The world was supposed to run out of oil by now, but instead we keep finding new ways to extract it from the ground. The United States has unexpectedly become a net provider of energy.

The debt problem in the United States was supposed to destroy the economy. Instead, the deficit is shrinking, the stock market is surging, and the price of gold is plummeting."

Comment author: Stefan_Schubert 24 April 2016 12:07:04PM 2 points [-]

Thanks. My claim is somewhat different, though. Adams says that "whenever humanity can see a slow-moving disaster coming, we find a way to avoid it". This is an all-things-considered claim. My claim is rather that sleepwalk bias is a pro tanto consideration indicating that we're too pessimistic about future disasters (perhaps especially slow-moving ones). I'm not claiming that we never sleepwalk into a disaster. Indeed, there might be stronger countervailing considerations, which if true would mean that all things considered we are too optimistic about existential risk.

Comment author: Matthew_Opitz 23 April 2016 04:09:11PM *  4 points [-]

There are also some examples of anti-sleepwalk bias:
1. World War I. The crisis unfolded over more than a month. Surely the diplomats will work something out, right? Nope.
2. Germany's invasion of the Soviet Union in World War II. Surely some of Hitler's generals will speak up and persuade Hitler away from this crazy plan when Germany has not even finished the first part of the war against Britain. Surely Germany would not willingly put itself into another two-front war even after many generals had explicitly decided that Germany must never get involved in another two-front war ever again. Right? Nope.
3. The sinking of the Titanic. Surely, with over two and a half hours to react to the iceberg impact before the ship finished sinking, SURELY there would be enough time to get all of the lifeboats safely and calmly loaded up to near max capacity, right? NOPE. And going even further back to the decision to not put enough lifeboats on in the first place...SURELY the White Star Line must have a good reason for this. SURELY this means that the ship really is unsinkable, right? NOPE.
4. The 2008 financial crisis. SURELY the monetary authorities have solved the problem of preventing recessions and smoothing out the business cycle. So SURELY I as a private trader can afford to be as reckless as I want and not have to worry about systemic risk, etc.

Comment author: Stefan_Schubert 24 April 2016 11:59:47AM *  0 points [-]

It is not quite clear to me whether you are just talking about instances of sleepwalking here, or whether you are also talking about a predictive error indicating anti-sleepwalk bias: i.e., that people wrongly predicted that the relevant actors would act, yet those actors sleepwalked into a disaster.

Also, my claim is not that sleepwalking never occurs, but that people on average seem to think that it happens more often than it actually does.
