
[Link] Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them

3 Stefan_Schubert 22 May 2017 06:31PM
Comment author: Stefan_Schubert 16 May 2017 07:56:05AM 1 point [-]

This was already posted a few links down.

[Link] Algorithmic tacit collusion

1 Stefan_Schubert 07 May 2017 02:57PM
Comment author: Stefan_Schubert 06 April 2017 03:57:33PM *  0 points [-]

One interesting aspect of posts like this is that they can, to some extent, be (felicitously) self-defeating.

[Link] Stuart Ritchie reviews Keith Stanovich's book "The rationality quotient: Toward a test of rational thinking"

4 Stefan_Schubert 11 January 2017 11:51AM
Comment author: Stefan_Schubert 05 October 2016 05:01:23PM *  1 point [-]

As Bastian Stern has pointed out to me, people often mix up pro tanto considerations with all-things-considered judgements - usually by interpreting what is merely intended as a pro tanto consideration as an all-things-considered judgement. Is there a name for this fallacy? It seems both dangerous and common, so it should have a name.

Social effects of algorithms that accurately identify human behaviour and traits

1 Stefan_Schubert 14 May 2016 10:48AM

Related to: Could auto-generated troll scores reduce Twitter and Facebook harassments?, Do we underuse the genetic heuristic? and Book review of The Reputation Society (part I, part II).


Today, algorithms can accurately identify personality traits and levels of competence from computer-observable data. FiveLabs and YouAreWhatYouLike are, for instance, able to reliably identify your personality traits from what you've written and liked on Facebook. Similarly, it's now possible for algorithms to fairly accurately identify how empathetic counselors and therapists are, and to identify online trolls. Automatic grading of essays is getting increasingly sophisticated. Recruiters increasingly rely on algorithms, which are, for instance, better than human recruiters at predicting job retention among low-skilled workers.
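For concreteness, here is a minimal sketch of how such a text-based trait predictor might work. The data, the extraversion scores and the model choice are all hypothetical illustrations (assuming scikit-learn is available), not a description of how FiveLabs or YouAreWhatYouLike actually work:

```python
# Minimal sketch: predicting a personality-trait score from a user's text.
# Toy data and a hypothetical 0-1 extraversion score; assumes scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Toy stand-ins for (status update text, extraversion score) pairs.
texts = [
    "had a great night out with friends, love you all!",
    "quiet evening at home reading, exactly how I like it",
    "can't wait for the party this weekend!!!",
    "spent the afternoon alone working on my model trains",
]
extraversion = [0.9, 0.2, 0.8, 0.1]  # hypothetical trait scores

# Word-use features (unigrams and bigrams) feeding a simple regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(texts, extraversion)

# Score an unseen user from their writing.
print(model.predict(["another fun weekend with the whole crew"]))
```

Real systems are trained on far larger corpora, but the basic pattern - superficial word-use features mapped to a trait score - is the same.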

These sorts of algorithms will no doubt become more accurate, and cheaper to train, in the future. With improved speech recognition, it will presumably be possible to assess both IQ and personality traits by letting your device overhear longer conversations. This could be extremely useful to, e.g., intelligence services or recruiters.

Because such algorithms could identify competent and benevolent people, they could provide a means to better social decisions. An alternative route to better decisions is to identify, e.g., factual claims as true or false, or arguments as valid or invalid. Numerous companies are working on such issues, with some measure of success, but especially when it comes to more complex and theoretical facts or arguments, this seems quite hard. It seems to me unlikely that we will have algorithms able to point out subtle fallacies anytime soon. By comparison, it seems much easier for algorithms to assess people's IQ or personality traits by looking at superficial features of word use and other readily observable behaviour. As we have seen, algorithms are already able to do that to some extent, and significant improvements in the near future seem possible.

Thus, rather than improving our social decisions by letting algorithms adjudicate the object-level claims and arguments, we would use them to give reliable ad hominem arguments against the participants in the debate. That is, rather than letting our algorithms show that a certain politician's claims are false and his arguments invalid, we would let them point out that he is less than brilliant and has sociopathic tendencies. The latter seems to me significantly easier (even though it will by no means be easy: it might take a long time before we have such algorithms).

Now for these algorithms to lead to better social decisions, it is of course not enough that they are accurate: they must also be perceived as accurate by the relevant decision-makers. In recruiting and the intelligence services, it seems likely that they increasingly will be, even though there will of course be some resistance. The resistance will probably be higher among voters, many of whom might prefer their own judgements of politicians to deferring to an algorithm. However, if the algorithms were sufficiently accurate, it seems unlikely that they wouldn't have profound effects on election results. Whichever candidates the algorithms favoured would scream the results from the rooftops, and it seems likely that this would affect undecided voters.

Besides better political decisions, these algorithms could also lead to more competent rule in other areas of society. This might affect, e.g., GDP and the rate of progress.

What would be the impact on existential risk? It seems likely to me that if algorithms led to the rule of the competent and the benevolent, that would lead to more efforts to reduce existential risk, to more co-operation in the world, and to better rule in general, and that all of these factors would reduce existential risk. However, there might also be countervailing considerations. These technologies could have a large impact on society and lead to chains of events that are very hard to predict. Still, my initial hunch is that they would mostly play a positive role for X-risk.

Could these technologies be held back for privacy reasons? It seems that secret use of these technologies to assess someone during everyday conversation could potentially be outlawed. It seems to me far less likely that it would be prohibited to use them to assess, e.g., a politician's intelligence, trustworthiness and benevolence. However, these things, too, are hard to predict.

Comment author: RyanCarey 11 May 2016 03:00:17AM 2 points [-]

If you get a well-labelled dataset, I think this is pretty thoroughly within the scope of current machine learning technologies, but that means spending perhaps hundreds of hours labelling papers with a postmodernism score out of 100. If you're trying to single out the postmodernism that you're convinced is total BS, then that's more complex. It's doable, but you'd need to make the case to me for why it would be worthwhile, and what exactly your aim would be.

Comment author: Stefan_Schubert 11 May 2016 01:39:09PM 0 points [-]

Thanks Ryan, that's helpful. Yes, I'm not sure one would be able to do something that has the right combination of accuracy, interestingness and low cost at present.

Comment author: RyanCarey 10 May 2016 04:21:05PM 0 points [-]

If you had a million labelled postmodern and non-postmodern papers, you could decently identify them.

You could categorise most papers with fewer labels using citation graphs.

You could recommend papers with a recommender system (using ratings), in the way Amazon recommends books.

There are hundreds of ways to apply machine learning to academic articles; it's a matter of deciding what you want the machine learning to do.
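For concreteness, a minimal sketch of the supervised approach Ryan describes, with toy abstracts and labels standing in for a real labelled corpus (assumes scikit-learn); this is an illustration of the technique, not a claim about what a production classifier would look like:

```python
# Minimal sketch: a postmodern / not-postmodern text classifier trained on
# labelled examples. Toy data here; a real corpus would need many labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for paper abstracts; 1 = postmodern, 0 = not.
abstracts = [
    "the hermeneutics of the decentred subject destabilises the grand narrative",
    "we measure the thermal conductivity of graphene at low temperatures",
    "discourse itself is a site of contested power and fluid signification",
    "a randomised controlled trial of vitamin D supplementation in adults",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(abstracts, labels)

# Probability that an unseen abstract reads as "postmodern".
new = ["the simulacrum effaces the distinction between sign and referent"]
print(clf.predict_proba(new)[:, 1])
```

The citation-graph and recommender-system approaches would reuse the same supervised machinery with different inputs (graph features or user ratings instead of word counts).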

Comment author: Stefan_Schubert 10 May 2016 05:08:53PM 0 points [-]

Sure, I guess my question was whether you'd think that it'd be possible to do this in a way that would resonate with readers. Would they find the estimates of quality, or level of postmodernism, intuitively plausible?

My hunch was that the classification would primarily be based on patterns of word use, but you're right that it would probably be fruitful to use patterns of citations as well.
