
[Link] Honesty and perjury

3 Benquo 17 January 2017 08:08AM
Comment author: hairyfigment 12 January 2017 06:15:52AM 1 point

I'm talking here about the linked post. The author's first example shows the exact opposite of what she said she would show. She only gives one example of something that she called a pattern, so that's one person saying they should consider dishonesty and another person doing the opposite.

If you think there's a version of her argument that is not total crap, I suggest you write it or at least sketch it out.

Comment author: Benquo 12 January 2017 09:12:22AM 4 points

Holding criticism to a higher standard than praise discourages people from calling out misrepresentations, which lowers the costs to liars of lying. I'd be surprised if Ben Todd were deliberately trying to clear a path for lies, but that's the direction things like that point.

Comment author: hairyfigment 12 January 2017 01:04:45AM 1 point

She does eventually give an example of what she says she's talking about - one example from Facebook, when she claimed to be seeing a pattern in many statements. Before that she objects to the standard use of the English word "promise," in exactly the way we would expect from an autistic person who has no ability to understand normal humans. Of course this is also consistent with a dishonest writer trying to manipulate autistic readers for some reason. I assume she will welcome this criticism.

(Seriously, I objected to her Ra post because the last thing humanity needs is more demonology; but even I didn't expect her to urge "mistrusting Something that speaks through them," like they're actually the pawns of demons. "Something" is very wrong with this post.)

The presence of a charlatan like Gleb around EA is indeed disturbing. I seem to recall people suggesting they were slow to condemn him because EA people need data to believe anything, and lack any central authority who could declare him anathema.

Comment author: Benquo 12 January 2017 07:24:12AM 3 points

I think that if you look at the actual epistemic content of this kind of demonology, it just cashes out to not committing the fundamental attribution error:

There are bad systems of behavior and thought that don't reflect the intrinsic badness of the individuals who express them, but rather broader social dynamics. There's selection pressure for social dynamics that justify, defend, and propagate themselves, so sometimes it can be intuitive to anthropomorphize them. A powerful agent for evil that can control otherwise good people's actions sounds like a demon.

Comment author: hairyfigment 12 January 2017 01:11:46AM 2 points

Another note I forgot to add: the first quote, about criticism, sounds like Ben Todd being extremely open and honest regarding his motives.

Comment author: Benquo 12 January 2017 01:53:18AM 5 points

Well, yes.

I think it's a bad motive, and one that leads towards less openness and honesty, but Ben Todd personally is being very open and honest about it, which is right and virtuous and says good things about him as a human being and about his intentions. I think this gives things like EA a chance at avoiding various traps that we'd otherwise fall into for sure - but it's not a get-out-of-jail-free card.

[Link] EA Has A Lying Problem

12 Benquo 11 January 2017 10:31PM
Comment author: Benquo 11 January 2017 01:23:05AM 1 point

A relevant anecdote from a friend:

He's been playing this new game called Generals. There's basically one dominant strategy against all humans, so when he's playing against a human, he sticks with the strategy, and focuses on execution - implementing the strategy faster and more reliably. This is Rajas - point at the target, and then go after it fast.

But the leaderboard is dominated by AIs, and eventually he got to that level. So the important work started happening between games; you can't beat the AI on reaction time. So he thought about how Lee Sedol had beaten AlphaGo in one game. Answer: by pushing it into a part of Go-space it hadn't explored. It turned out that if he played the strategy that's second or third best against humans, it was totally outside the AI's experience, so he could wipe the floor with it. This is Sattva - think your way around the problem. Take perspective.

People who are new to online strategy games tend to spend their initial games trying to stay alive instead of trying to accomplish their game goals (destroy the other player). This is Tamas. Just keep your head above water. Enough food in your body. Hide from threats. Live to fight another day. Not very adaptive in online games, where death isn't very costly, but adaptive when facing real-life threats.

Comment author: John_Maxwell_IV 22 December 2016 01:29:51PM 2 points

Normally we think of the burden of proof resting on writers. But that is just a social convention. I haven't heard a consequentialist justification for this.

Comment author: Benquo 04 January 2017 10:33:48PM 1 point

Posting or commenting imposes a cost in the form of a claim on the attention of your readers. It also provides a benefit in the form of information.

Perhaps the burden on writers should simply be to show that their writing is relevant enough, and likely enough to be correct, to justify the claim it makes on readers' time and attention. This burden should be higher on shared fora than on personal blogs, higher for posts than for comments, higher for parent comments than for replies, higher for off-topic than for on-topic posts, and higher for speculation than for fact posts.
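
As an illustration of this sliding scale (not part of the original comment; every name and weight below is invented), a toy scoring rule might look like this in Python:

    # Toy model: a post must justify its claim on readers' attention.
    # All weights are invented for illustration.
    ATTENTION_COST = {  # hypothetical relative cost per reader
        ("shared_forum", "post"): 8.0,
        ("shared_forum", "comment"): 3.0,
        ("personal_blog", "post"): 2.0,
        ("personal_blog", "comment"): 1.0,
    }

    def justifies_attention(relevance, p_correct, venue, kind,
                            is_reply=False, off_topic=False, speculative=False):
        """relevance and p_correct are the writer's own estimates in [0, 1]."""
        burden = ATTENTION_COST[(venue, kind)]
        if is_reply:
            burden *= 0.5  # replies bear a lower burden than parent comments
        if off_topic:
            burden *= 2.0  # off-topic bears a higher burden than on-topic
        if speculative:
            burden *= 1.5  # speculation bears a higher burden than fact posts
        expected_value = 10.0 * relevance * p_correct  # arbitrary value scale
        return expected_value >= burden

    # A speculative post on a shared forum must clear 8.0 * 1.5 = 12.0:
    print(justifies_attention(0.9, 0.8, "shared_forum", "post", speculative=True))
    # -> False: 10.0 * 0.9 * 0.8 = 7.2 does not clear the burden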

Comment author: jsalvatier 26 December 2016 06:49:31PM 2 points

I think you may be misunderstanding why people focus on selection mechanisms. Selection mechanisms can have big effects on both the private status returns to quality in comments (~5x) and the social returns to quality (~1000x). Similar effects are much less plausible with treatment effects.

Claim: selection mechanisms are much more powerful than treatment effects.

I think people are using the heuristic: If you want big changes in behavior, focus on incentives.

Selection mechanisms can make relatively big changes in the private status returns to making high-quality comments by making high-quality comments much more recognized and visible. That makes the authors higher status, which gives them good reason to invest more in making the comments. If you get 1000x the audience when you make high-quality comments, you're going to feel substantially higher status.

Selection mechanisms can make the social returns to quality much larger by focusing people's attention on high-quality comments (whereas before, many people might have had difficulty recognizing a high-quality comment even after reading it).
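
For intuition, here is a minimal simulation (my own sketch, not the commenter's model; the 1/rank attention curve is an assumption) of how ranking comments by quality concentrates the audience on the best one:

    # Sketch: a selection mechanism (ranking comments by quality) can
    # multiply the audience returns to quality by orders of magnitude.
    num_comments = 1000
    total_views = 1_000_000

    # Without selection, attention is spread uniformly over all comments.
    uniform_views = total_views / num_comments

    # With selection, comments are shown ranked by quality, and attention
    # falls off steeply with rank (weight proportional to 1/rank).
    weights = [1.0 / rank for rank in range(1, num_comments + 1)]
    top_views = total_views * weights[0] / sum(weights)

    print(f"views per comment without selection: {uniform_views:.0f}")  # ~1000
    print(f"views for the top-ranked comment: {top_views:.0f}")  # ~133600
    print(f"audience multiplier from selection: {top_views / uniform_views:.0f}x")

Even this crude curve yields a multiplier of roughly 130x; a steeper attention curve or a larger comment pool pushes it toward the ~1000x figure above.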

Comment author: Benquo 04 January 2017 10:27:36PM * 0 points

"More powerful" seems like it's implicitly using categories that don't cut at the joints. I think Aceso Under Glass's post on Tostan makes an important distinction between capacity-building and capacity-using interventions:

This is more speculative, but I feel like the most legible interventions are using something up. Charity Science: Health is producing very promising results with SMS vaccine reminders in India, but that’s because the system already had some built in capacity to use that intervention (a ~working telephone infrastructure, a populace with phones, government health infrastructure, medical research that identified a vaccine, vaccine manufacture infrastructure… are you noticing a theme here?). [...] Having that capacity and not using it was killing people. But I don’t think that CS’s intervention style will create much new capacity. For that you need inefficient, messy, special snowflake organizations.

I'd guess that treatment effects seem less powerful than selection effects of equal importance because treatment effects typically lean more heavily on capacity-building.

[Link] Exploitation as a Turing test

4 Benquo 04 January 2017 08:55PM
Comment author: Dagon 01 January 2017 04:41:56PM 2 points

GiveWell is certainly different from those examples. Your examples all involve a clear motive to convince people to use their own product, even if better ones are out there. GiveWell is an analyst, not a producer of good, and is explicitly trying to guide people to the best choice (within a set of constraints).

A better example would be choosing a restaurant. Michelin and Yelp have far more data and have put far more work into evaluating and rating food providers than you ever could. But you still need to figure out how your preferences fit into their evaluation framework, and navigate the always-changing landscape to make an actual choice.

(Note that the conclusion is the same: you still must expend some search cost.)

Comment author: Benquo 02 January 2017 01:13:41AM 0 points

I don't think "incentive" cuts at the joints here, but selection pressure does. You're going to hear about the best self-promoters targeting you, which is only an indicator of qualities you care about to the extent that those qualities contributes to self-promotion in that market.

Personal experience: I occasionally use Yelp, but in some cases it's worse than useless, because I care about a pretty high standard of food quality, and Yelp restaurant reviews are often about whether the waiter was nice, the restaurant seemed fancy, or the portions were big; sometimes people even mark restaurants down for having inventive and therefore challenging food. So I often get better information from the Chowhound message board, which no one except foodies has heard of.
