Truth: It's Not That Great

35 ChrisHallquist 04 May 2014 10:07PM

Rationality is pretty great. Just not quite as great as everyone here seems to think it is.

-Yvain, "Extreme Rationality: It's Not That Great"

The folks most vocal about loving "truth" are usually selling something. For preachers, demagogues, and salesmen of all sorts, the wilder their story, the more they go on about how they love truth...

The people who just want to know things because they need to make important decisions, in contrast, usually say little about their love of truth; they are too busy trying to figure stuff out.

-Robin Hanson, "Who Loves Truth Most?"

A couple weeks ago, Brienne made a post on Facebook that included this remark: "I've also gained a lot of reverence for the truth, in virtue of the centrality of truth-seeking to the fate of the galaxy." But then she edited to add a footnote to this sentence: "That was the justification my brain originally threw at me, but it doesn't actually quite feel true. There's something more directly responsible for the motivation that I haven't yet identified."

I saw this, and commented:

<puts rubber Robin Hanson mask on>

What we have here is a case of subcultural in-group signaling masquerading as something else. In this case, proclaiming how vitally important truth-seeking is is a mark of your subculture. In reality, the truth is sometimes really important, but sometimes it isn't.

</rubber Robin Hanson mask>

In spite of the distancing pseudo-HTML tags, I actually believe this. When I read some of the more extreme proclamations of the value of truth that float around the rationalist community, I suspect people are doing in-group signaling—or perhaps conflating their own idiosyncratic preferences with rationality. As a mild antidote to this, when you hear someone talking about the value of the truth, try seeing if the statement still makes sense if you replace "truth" with "information."

This standard gives its stamp of approval to many statements about the value of truth. After all, information is pretty damn valuable. But statements like "truth seeking is central to the fate of the galaxy" look a bit suspicious. Is information-gathering central to the fate of the galaxy? You could argue that statement is kinda true if you squint at it right, but really it's too general. Surely it's not just any information that's central to shaping the fate of the galaxy, but information about specific subjects, and even then there are tradeoffs to make.

This is an example of why I suspect "effective altruism" may be better branding for a movement than "rationalism." The "rationalism" branding encourages the meme that truth-seeking is great, so we should do lots and lots of it, because truth is so great. The effective altruism movement, on the other hand, recognizes that while gathering information about the effectiveness of various interventions is important, there are tradeoffs to be made between spending time and money on gathering information vs. just doing whatever currently seems likely to have the greatest direct impact. Recognize that information is valuable, but avoid analysis paralysis.

Or, consider statements like:

  • Some truths don't matter much.
  • People often have legitimate reasons for not wanting others to have certain truths.
  • The value of truth often has to be weighed against other goals.

Do these statements sound heretical to you? But what about:

  • Information can be perfectly accurate and also worthless. 
  • People often have legitimate reasons for not wanting other people to gain access to their private information. 
  • A desire for more information often has to be weighed against other goals. 

I struggled to write the first set of statements, though I think they're right on reflection. Why do they sound so much worse than the second set? Because the word "truth" carries powerful emotional connotations that go beyond its literal meaning. This isn't just true for rationalists—there's a reason religions have sayings like, "God is Truth" or "I am the way, the truth, and the life." "God is Facts" or "God is Information" don't work so well.

There's something about "truth"—how it readily acts as an applause light, a sacred value which must not be traded off against anything else. As I type that, a little voice in me protests "but truth really is sacred"... but once we can't admit there's some limit to how great truth is, hello affective death spiral.

Consider another quote, from Steven Kaas, that I see frequently referenced on LessWrong: "Promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires, because they’re Nazis or whatever." Interestingly, the original blog post included a caveat—"we may have to count everyday social interactions as a partial exception"—which I never see quoted. That aside, the quote has always bugged me. I've never had my tires slashed, but I imagine it ruins your whole day. On the other hand, having less than maximally accurate beliefs about something could ruin your whole day, but it could very easily not, depending on the topic.

Furthermore, sometimes sharing certain information doesn't just have little benefit, it can have substantial costs, or at least substantial risks. It would seriously trivialize Nazi Germany's crimes to compare Nazi Germany to the current US government, but I don't think that means we have to promote maximally accurate beliefs about ourselves to the folks at the NSA. Or, when negotiating over the price of something, are you required to promote maximally accurate beliefs about the highest price you'd be willing to pay, even if the other party isn't willing to reciprocate and may respond by demanding that price?

Private information is usually considered private precisely because it has limited benefit to most people, but sharing it could significantly harm the person whose private information it is. A sensible ethic around information needs to be able to deal with issues like that. It needs to be able to deal with questions like: is this information that is in the public interest to know? And is there a power imbalance involved? My rule of thumb is: secrets kept by the powerful deserve extra scrutiny, but so conversely do their attempts to gather other people's private information. 

"Corrupted hardware"-type arguments can suggest you should doubt your own justifications for deceiving others. But parallel arguments suggest you should doubt your own justifications for feeling entitled to information others might have legitimate reasons for keeping private. Arguments like, "well truth is supremely valuable," "it's extremely important for me to have accurate beliefs," or "I'm highly rational so people should trust me" just don't cut it.

Finally, being rational in the sense of being well-calibrated doesn't necessarily require making truth-seeking a major priority. Using the evidence you have well doesn't necessarily mean gathering lots of new evidence. Often, the alternative to knowing the truth is not believing falsehood, but admitting you don't know and living with the uncertainty.

Comment author: lukeprog 28 April 2014 11:09:02PM 31 points [-]

I got a job via an internship via LW, and definitely lots of friends. Louie and I made online contact via LW, and he convinced me to intern with SIAI, and later I was offered a paying job there.

Comment author: ChrisHallquist 29 April 2014 04:27:29AM 24 points [-]

I love how understated this comment is.

Comment author: ChrisHallquist 29 April 2014 04:26:23AM -1 points [-]

People voluntarily hand over a bunch of resources (perhaps to a bunch of different AIs) in the name of gaining an edge over their competitors, or possibly for fear of their competitors doing the same thing to gain such an edge. Or just because they expect the AI to do it better.

Comment author: Jayson_Virissimo 08 April 2014 06:06:03PM *  11 points [-]

App Academy has been discussed here before and several Less Wrongers have attended (such as ChrisHallquist, Solvent, Curiouskid, and Jack).

I am considering attending myself during the summer and am soliciting advice pertaining to (i) maximizing my chance of being accepted to the program and (ii) maximizing the value I get out of my time in the program given that I am accepted. Thanks in advance.

EDIT: I ended up applying and just completed the first coding test. Wasn't too difficult. They give you 45 minutes, but I only needed < 20.

EDIT2: I have reached the interview stage. Thanks everyone for the help!

EDIT3: Finished the interview. Now awaiting AA's decision.

EDIT4: Yet another interview scheduled...this time with Kush Patel.

EDIT5: Got an acceptance e-mail. Decision time...

EDIT6: Am attending the August cohort in San Francisco.

Comment author: ChrisHallquist 09 April 2014 08:03:31PM 3 points [-]

Maximizing your chances of getting accepted: Not sure what to tell you. It's mostly about the coding questions, and the coding questions aren't that hard—"implement bubble sort" was one of the harder ones I got. At least, I don't think that's hard, but some people would struggle to do that. Some people "get" coding, some don't, and it seems to be hard to move people from one category to another.
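(For a sense of that difficulty level: bubble sort really is only a few lines. Here is a minimal sketch in Python, which is not necessarily the language App Academy's coding questions use:)

```python
def bubble_sort(items):
    """Repeatedly sweep the list, swapping adjacent out-of-order pairs,
    until a full pass completes with no swaps."""
    items = list(items)  # work on a copy; leave the caller's list alone
    n = len(items)
    swapped = True
    while swapped:
        swapped = False
        for i in range(n - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        n -= 1  # the largest unsorted element has bubbled to the end
    return items

print(bubble_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```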

Maximizing value given that you are accepted: Listen to Ned. I think that was the main piece of advice people from our cohort gave people in the incoming cohort. Really. Ned, the lead instructor, knows what he's doing, and really cares about the students who go through App Academy. And he's seen what has worked or not worked for people in the past.

(I might also add, based on personal experience, "don't get cocky about the assessments." Also "get enough sleep," and should you end up in a winter cohort, "if you go home for Christmas, fly back a day earlier than necessary.")

Comment author: peter_hurford 28 March 2014 10:56:58PM 1 point [-]

for example, can we trust 80,000 Hours' estimates of the multiplier on giving to GiveWell and GWWC? Might other organizations (such as the Centre for Effective Altruism, which is behind 80,000 Hours) be more effective at movement-building?

I'm pretty sure that 80K believes that a donation to CEA is the best for movement building.

Comment author: ChrisHallquist 29 March 2014 04:08:30AM 0 points [-]

Presumably. The question is whether we should accept that belief of theirs.

Effective Effective Altruism Fundraising and Movement-Building

2 ChrisHallquist 28 March 2014 09:05PM

The title of this post isn't a typo—its purpose is to ask how we can effectively do fundraising and movement-building for the effective altruism movement. This is an important question, because the return on these activities is potentially very high. As Robert Wiblin wrote on the topic of fundraising over a year ago:

GiveWell’s charity recommendations – currently Against Malaria Foundation, GiveDirectly and the Schistosomiasis Control Initiative – are generally regarded as the most reliable in their field. I imagine many readers here donate to these charities. This makes it all the more surprising that it should be pretty easy to start a charity more effective than any of them.

All you would need to do is found an organisation that fundraises for whoever GiveWell recommends, and raises more than a dollar with each dollar it receives. Is this hard? Probably not. As a general rule, a dollar spent on fundraising seems to raise at least several dollars.

Similarly, a more recent post at the 80,000 Hours blog asked "What cause is most effective?" and ended up concluding that "promoting effective altruism" was tied with "prioritization research" for the currently most effective cause. According to 80,000 Hours:

Promoting effective altruism is effective because it’s a flexible multiplier on the next most high-priority cause. It’s important because we expect the most high-priority areas to change a great deal, so it’s good to build up general capabilities to take the best opportunities as they are discovered. Moreover, in the recent past, investing in promoting effective altruism has resulted in significantly more resources being invested in the most high-priority areas, than investing in them directly. For instance, for every US$1 invested in GiveWell and Giving What We Can, more than $7 have been moved to high-priority interventions.

However, there are a number of questions to ask about this: for example, can we trust 80,000 Hours' estimates of the multiplier on giving to GiveWell and GWWC? Might other organizations (such as the Centre for Effective Altruism, which is behind 80,000 Hours) be more effective at movement-building?

One interesting question is whether, from a movement-building perspective, it might make sense to (1) donate to an organization that does both movement building / cause prioritization and grant-making to object-level useful things (as GiveWell does), or (2) split your donation between an organization that does movement building and an organization that does object-level useful things. The rationale for this, particularly (2), is that donating exclusively to movement-building might not be the best thing for movement building, for a number of reasons:

  1. Donating exclusively to organizations focused on movement-building might hamper your ability to evangelize for effective altruism—people would quite justifiably be suspicious of an effective altruism movement that was too focused on movement-building.
  2. Similarly, from the point of view of the movement as a whole, people's justifiable suspicions of an EA movement too focused on movement building might lead to such a movement growing more slowly than an EA movement that was less focused on movement building.
  3. On why the suspicions in 1 and 2 are justified: even if an EA movement that was very focused on movement-building grew faster than one less focused on movement building, it could easily grow into the wrong kind of movement—one only good at self-promotion, not doing object-level useful things.
  4. Concrete successes by EA-backed charities may themselves be very valuable for helping build the EA movement.

If splitting donations does make sense for reasons like these, then what should the ratio be? 50/50 is a tempting Schelling point. Another option would be to try to figure out the optimal ratio for the movement as a whole and make that your personal ratio. But other people may have better ideas on how to do such a split.
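(As a toy illustration of that second option, with made-up numbers rather than actual estimates, mirroring an assumed movement-wide optimal ratio in a personal donation might look like this:)

```python
def split_donation(total, movement_building_fraction):
    """Split a donation between movement-building and object-level charities,
    mirroring an estimated movement-wide optimal ratio."""
    to_movement_building = total * movement_building_fraction
    to_object_level = total - to_movement_building
    return to_movement_building, to_object_level

# Hypothetical: you estimate the movement as a whole should put 30% of its
# resources into movement-building, so a personal $1,000 donation mirrors that.
print(split_donation(1000, 0.30))  # (300.0, 700.0)
```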

Comment author: Jiro 17 March 2014 07:58:18AM *  1 point [-]

The past 80+ comments from me have all had at least one downvote. There is no reasonable way to interpret this other than as having a stalker.

And the solution to how not to catch false positives is to use some common sense. You're never going to have an automated algorithm that can detect every instance of abuse, but even an instance that is not detectable by automatic means can be detectable if someone with sufficient database access takes a look when it is pointed out to them.

Comment author: ChrisHallquist 17 March 2014 08:51:55PM -2 points [-]

And the solution to how not to catch false positives is to use some common sense. You're never going to have an automated algorithm that can detect every instance of abuse, but even an instance that is not detectable by automatic means can be detectable if someone with sufficient database access takes a look when it is pointed out to them.

Right on. The solution to karma abuse isn't some sophisticated algorithm. It's extremely simple database queries, in plain English along the lines of "return list of downvotes by user A, and who was downvoted," "return downvotes on posts/comments by user B, and who cast the vote," and "return lists of downvotes by user A on user B."
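(For illustration, here is roughly what those three queries could look like, sketched in Python with sqlite3 against a hypothetical votes table; the actual LessWrong schema is surely different:)

```python
import sqlite3

# Hypothetical schema: one row per vote, recording who voted on whose content.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE votes (
           voter  TEXT,     -- user who cast the vote
           author TEXT,     -- user whose post/comment received the vote
           value  INTEGER   -- +1 for an upvote, -1 for a downvote
       )"""
)

# "Return list of downvotes by user A, and who was downvoted."
downvotes_by_a = conn.execute(
    "SELECT author, COUNT(*) FROM votes "
    "WHERE voter = ? AND value = -1 GROUP BY author",
    ("user_a",),
).fetchall()

# "Return downvotes on posts/comments by user B, and who cast the vote."
downvotes_on_b = conn.execute(
    "SELECT voter, COUNT(*) FROM votes "
    "WHERE author = ? AND value = -1 GROUP BY voter",
    ("user_b",),
).fetchall()

# "Return list of downvotes by user A on user B."
a_on_b = conn.execute(
    "SELECT COUNT(*) FROM votes WHERE voter = ? AND author = ? AND value = -1",
    ("user_a", "user_b"),
).fetchone()
```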

Comment author: [deleted] 09 March 2014 06:54:29PM 5 points [-]

Ah, of course, because it's more important to signal one's pure, untainted epistemic rationality than to actually get anything done in life, which might require interacting with outsiders.

In response to comment by [deleted] on Rationality Quotes March 2014
Comment author: ChrisHallquist 10 March 2014 02:23:10AM 0 points [-]

Ah, of course, because it's more important to signal one's pure, untainted epistemic rationality than to actually get anything done in life, which might require interacting with outsiders.

This is a failure mode I worry about, but I'm not sure ironic atheist re-appropriation of religious texts is going to turn off anyone we had a chance of attracting in the first place. Will reconsider this position if someone says, "oh yeah, my deconversion process was totally slowed down by stuff like that from atheists," but I'd be surprised.

Comment author: Yvain 07 March 2014 08:23:45PM 3 points [-]

I agree that disagreement among philosophers is a red flag that we should be looking for alternative positions.

But again, I don't feel like that's strong enough. Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?

Examples?

Well, take Barry Marshall. Became convinced that ulcers were caused by a stomach bacterium (he was right; later won the Nobel Prize). No one listened to him. He said that "my results were disputed and disbelieved, not on the basis of science but because they simply could not be true...if I was right, then treatment for ulcer disease would be revolutionized. It would be simple, cheap and it would be a cure. It seemed to me that for the sake of patients this research had to be fast tracked. The sense of urgency and frustration with the medical community was partly due to my disposition and age."

So Marshall decided since he couldn't get anyone to fund a study, he would study it on himself, drank a serum of bacteria, and got really sick.

Then due to a weird chain of events, his results ended up being published in the Star, a tabloid newspaper that by his own admission "talked about alien babies being adopted by Nancy Reagan", before they made it into legitimate medical journals.

I feel like it would be pretty easy to check off a bunch of boxes on any given crackpot index..."believes the establishment is ignoring him because of their biases", "believes his discovery will instantly solve a centuries-old problem with no side effects", "does his studies on himself", "studies get published in tabloid rather than journal", but these were just things he naturally felt or had to do because the establishment wouldn't take him seriously and he couldn't do things "right".

I don't think "smart people saying stupid things" reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that "I know God really exists and I have no doubts about it," which is maybe less than the general public but still a sizeable minority

I think it is much much less than the general public, but I don't think that has as much to do with IQ per se as with academic culture. But although I agree that the finding that IQ isn't a stronger predictor of correct beliefs than it is is interesting, I am still very surprised that you don't seem to think it matters at all (or at least significantly). What if we switched gears? Agreeing that the fact that a contrarian theory is invented or held by high IQ people is no guarantee of its success, can we agree that the fact that a contrarian theory is invented and mostly held by low IQ people is a very strong strike against it?

Proper logical form comes cheap, just add a premise which says, "if everything I've said so far is true, then my conclusion is true."

Proper logical form comes cheap, but a surprising number of people don't bother even with that. Do you frequently see people appending "if everything I've said so far is true, then my conclusion is true" to screw with people who judge arguments based on proper logical form?

Comment author: ChrisHallquist 08 March 2014 07:05:05PM 0 points [-]

Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?

What's your proposal for how to do that, aside from just evaluating the arguments the normal way? Ignore the politicians, and we're basically talking about people who all have PhDs, so education can't be the heuristic. You also proposed IQ and rationality, but admitted we aren't going to have good ways to measure them directly, aside from looking for "statements that follow proper logical form and make good arguments." I pointed out that "good arguments" is circular if we're trying to decide who to read charitably, and you had no response to that.

That leaves us with "proper logical form," about which you said:

Proper logical form comes cheap, but a surprising number of people don't bother even with that. Do you frequently see people appending "if everything I've said so far is true, then my conclusion is true" to screw with people who judge arguments based on proper logical form?

In response to this, I'll just point out that this is not an argument in proper logical form. It's a lone assertion followed by a rhetorical question.

Comment author: ChrisHallquist 04 March 2014 05:48:16AM 1 point [-]

Skimming the "disagreement" tag in Robin Hanson's archives, I found I few posts that I think are particularly relevant to this discussion:
