
[Link] Honesty and perjury

4 Benquo 17 January 2017 08:08AM

[Link] EA Has A Lying Problem

12 Benquo 11 January 2017 10:31PM

[Link] Exploitation as a Turing test

4 Benquo 04 January 2017 08:55PM

Claim explainer: donor lotteries and returns to scale

3 Benquo 30 December 2016 07:46PM

Sometimes, new technical developments in the discourse around effective altruism can be difficult to understand if you're not already aware of the underlying principles involved. I'm going to try to explain the connection between one such new development and an important underlying claim. In particular, I'm going to explain the connection between donor lotteries (as recently implemented by Carl Shulman in cooperation with Paul Christiano)1 and returns to scale. (This year I’m making a $100 contribution to this donor lottery, largely for symbolic purposes to support the concept.)

I'm not sure I'm adding much to Carl's original post on making bets to take advantage of returns to scale with this explainer. Please let me know whether you think this added anything or not.

What is a donor lottery?

Imagine ten people each have $1,000 to give to charity this year. They pool their money, and draw one of their names out of a hat. The winner gets to decide how to give away all $10,000. This is an example of a donor lottery.

More generally, a donor lottery is an arrangement where a group of people pool their money and pick one person to give it away. This selection is randomized so that each person has a probability of being selected proportional to their initial contribution.
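
For concreteness, here's a minimal sketch of the draw itself; the donor names and amounts are made up, and a real arrangement (like Carl and Paul's) also needs a trusted party to hold the pool and make the payout:

```python
import random

def run_donor_lottery(contributions):
    """Pick one donor to allocate the whole pool, with probability
    proportional to each donor's contribution."""
    donors = list(contributions)
    weights = [contributions[d] for d in donors]
    winner = random.choices(donors, weights=weights, k=1)[0]
    return winner, sum(weights)

# Ten donors putting in $1,000 each, as in the example above.
contributions = {"donor_%d" % i: 1_000 for i in range(10)}
winner, pool = run_donor_lottery(contributions)
print(winner, "allocates the full pool of $%d" % pool)
```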

Selfish reasons to gamble

Let's start with the case of a non-charitable expenditure. Usually, for consumption decisions, we have what economists call diminishing marginal utility. This is because we have limited ability to consume things, and also because we make the best purchases first.

Food is an example of something we have limited appetite for. After a certain point, we just aren't hungry anymore. But we also buy the more important things first. Your first couple of dollars a day make the difference between going hungry and having enough food. Your next couple of dollars a day go to buying convenience or substituting higher-quality foods, which is a material improvement, but nowhere near as big as the difference between starving and fed.

To take a case that's less universal but makes the principle easier to see, let's say I'm outfitting a kitchen and own no knives. I can buy one of two knives – a small knife or a large one. The large knife can do a good job cutting large things, and a bad job cutting small things. The small knife can do a good job cutting small things, and a bad job cutting large things. If I buy either one of these knives, I get the benefit of being able to cut both large and small items at all, plus the benefit of being able to cut things well in one category. If I buy the second knife, I only improve the situation by the difference between being able to cut things poorly in one category and being able to cut them well. This is a smaller difference. I'd rather have one knife with certainty than a 50% chance of getting both.
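
To put rough numbers on that last comparison (the utility figures here are made up purely for illustration):

```python
# Diminishing returns: the first knife adds more value than the second.
value_one_knife = 10    # can cut everything, one category of it well (made-up units)
value_both_knives = 12  # everything cut well; only a small further gain

certain_single_knife = value_one_knife            # 10
fifty_fifty_for_both = 0.5 * value_both_knives    # 6 (50% chance of both, 50% of neither)
print(certain_single_knife, fifty_fifty_for_both)  # the sure single knife wins
```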

But sometimes, returns to consumption are increasing. Let's say that I have a spare thousand dollars after meeting all my needs, and there's only one more thing in the world I want that money can buy – a brand-new $100,000 sports car, unique enough that there are no reasonable substitutes. The $1,000 does me no good at all, $99,000 would do me no good at all, but as soon as I have $100,000, I can buy that car.

One thing I might want to do in this situation is gamble. If I can go to a casino and make a bet that has a 1% chance of multiplying my money a hundredfold (ignoring the house's cut for simplicity), then this is a good deal. Here's why. In the situation where I don't make the bet, I have a 100% chance of getting no value from the money. In the situation where I do make the bet, I have a 99% chance of losing the money, which I don't mind since I had no use for it anyway, but a 1% chance of being able to afford that sports car.

But since in practice the house does take a cut at casinos, and winnings are taxed, I might get a better deal by pooling my money together with 99 other like-minded people, and selecting one person at random to get the car. This way, 99% of us are no worse off, and one person gets a car.
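
Here's the same reasoning as a toy calculation, using the numbers from the example; the all-or-nothing utility function is a deliberately extreme stand-in for "the only thing I want is the car":

```python
# Threshold ("lumpy") utility: the money is only worth anything once it
# reaches the $100,000 price of the car.
def utility(wealth, car_price=100_000):
    return 1 if wealth >= car_price else 0

keep_the_cash = utility(1_000)                              # 0: never reaches the threshold
take_the_bet = 0.99 * utility(0) + 0.01 * utility(100_000)  # 0.01

print(keep_the_cash, take_the_bet)  # the gamble is strictly better
```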

The sports car scenario may seem far-fetched, especially once you take into account the prospect of saving up for things, or unexpected expenses. But it's not too far from the principle behind the susu, or ROSCA:

Susus are generally made up of groups of family members or friends, each of whom pledges to put a certain amount of money into a central pot each week. That pot, presided over by a treasurer, whose honesty is vouched for by his or her respected standing among the participants, is then given to one member of the group.
Over the course of a susu's life, each member will receive a payout exactly equal to the total he has put in, which could range from a handful of dollar bills to several thousand dollars; members earn no interest on the money they set aside. After a complete cycle, the members either regroup and start over or go their separate ways.

In communities where people either don't have access to savings or don't have the self-control to avoid spending down their savings on short-run emergencies, the susu is the opposite of consumption smoothing - it enables participants to bunch their spending together to make important long-run investments.2

A susu bears a strong resemblance to a partially randomized version of a donor lottery, for private gain.

Gambling for the greater good

Similarly, if you’re trying to do the most good with your money, you might want to take into account returns to scale. As in the case of consumption, the "normal" case is diminishing returns to scale, because you're going to want to fund the best things you know of first. But you might think that the returns to scale are increasing in one of two ways:

  • Diminishing marginal costs
  • Increasing marginal benefits

Diminishing marginal costs

Let’s say that your charity budget for this year is $5,000, and your best guess is that it will take about five hours of research to make a satisfactory giving decision. You expect that you’ll be giving to charities for which $5,000 is a small amount, so that they have roughly constant returns to scale with respect to your donation. (This matters because what we care about are benefits, not costs.) In particular, for the sake of simplicity, let’s say that you think that the best charity you’re likely to find can add a healthy year to someone’s life for $250, so your donation can buy 20 life-years.

Under these circumstances, suppose that someone you trust offers you a bet with a 90% probability of getting nothing, and a 10% probability of getting back ten times what you put in. In this case, if you make a $5,000 bet, your expected giving is 10% * 10 * $5,000 = $5,000, the same as before. And if you expect the same impact per dollar up to $50,000, then if you win, your donation saves $50,000 / $250 = 200 life-years for beneficiaries of this charity. Since you only have a 10% chance of winning, your expected impact is 20 life-years, same as before.

But you only need to spend time evaluating charities if you win, so your expected time expenditure is 10% * 5 = 0.5 hours. This is strictly better – you have the same expected impact, for a tenth the expected research time.

These numbers are made up and in practice you don’t know what the impact of your time will be, but the point is that if you’re constrained by time to evaluate donations, you can get a better deal through lotteries.
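
For anyone who wants the arithmetic spelled out, here's the comparison above as a short sketch, using the same made-up numbers:

```python
cost_per_life_year = 250   # dollars per healthy life-year, per the example
budget = 5_000
research_hours = 5
p_win, multiplier = 0.10, 10

# Option 1: give the $5,000 directly.
direct_impact = budget / cost_per_life_year          # 20 life-years
direct_hours = research_hours                        # 5 hours

# Option 2: take the bet, and only do the research if you win.
lottery_impact = p_win * (multiplier * budget) / cost_per_life_year  # 20 life-years
lottery_hours = p_win * research_hours               # 0.5 hours

print(direct_impact, direct_hours)    # 20.0 5
print(lottery_impact, lottery_hours)  # 20.0 0.5
```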

Increasing marginal benefits

The smooth case

Of course, if you’re giving away $50,000, you might be motivated to spend more than five hours on this. Let’s say that you think that you can find a charity that’s 10% more effective if you spend ten hours on it. Then in the winning scenario, you’re spending an extra five hours to save an extra 20 life-years, not a bad deal. Your expected impact is then 22 life-years, higher than in the original case, and your expected time allocation is 1 hour, still much less than before.
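
Again with the same made-up numbers:

```python
# The "smooth" case: winning $50,000 makes ten hours of research worthwhile,
# and that research finds a charity that's 10% more effective.
p_win = 0.10
base_life_years = 200                          # $50,000 at $250 per life-year
improved_life_years = base_life_years * 1.10   # 220 after ten hours of research

expected_impact = p_win * improved_life_years  # ~22 life-years (vs. 20 before)
expected_hours = p_win * 10                    # 1 hour (vs. 5 for giving directly)
print(expected_impact, expected_hours)
```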

The lumpy case

Let’s say that you know someone considering launching a new program, which you believe would be a better value per dollar than anything else you can find in a reasonable amount of time. But they can only run the program if they get a substantial amount of initial funds; for half as much, they can’t do anything. They’ve tried a “kickstarter” style pledge drive, but there aren’t enough other donors interested. You have a good reason to believe that this isn’t because you’re mistaken about the program.

You’d fund the whole thing yourself, but you only have 10% of the needed funds on hand. Once again, you’d want to play the odds.

Lotteries, double-counting, and shared values

One objection I’ve seen potential participants raise against donor lotteries is that they’d feel obliged to take into account the values of other participants if they won. This objection is probably related to the prevalence of double-counting schemes to motivate people to give.

I previously wrote about ways in which "matching donation" drives only seem like they double your impact because of double-counting:

But the main problem with matching donation fundraisers is that even when they aren't lying about the matching donor's counterfactual behavior, they misrepresent the situation by overassigning credit for funds raised.
I'll illustrate this with a toy example. Let's say that a charity - call it Good Works - has two potential donors, Alice and Bob, who each have $1 to give, and don't know each other. Alice decides to double her impact by pledging to match the next $1 of donations. If this works, and someone gives because of her match offer, then she'll have caused $2 to go to Good Works. Bob sees the match offer and reasons similarly: if he gives $1, this causes another $1 to go to Good Works, so his impact is doubled - he'll have caused Good Works to receive $2.
But if Alice and Bob each assess their impact as $2 of donations, then the total assessed impact is $4 - even though Good Works only receives $2. This is what I mean when I say that credit is overassigned - if you add up the amount of funding each donor is supposed to have caused, you get a number that exceeds the total amount of funds raised.

If you tried to justify donor lotteries this way, it would look like this: Let's say you and nine other people each put in $10,000. You have a 10% chance of getting to give away $100,000. But if you lose, the other nine people still want to give to something that fulfills your values at least somewhat. So you are giving away more than $10,000 in expectation. This is double-counting because if you apply it consistently to each member of the group in turn, it assigns credit for more funding than the entire group is responsible for. It only works if you think you're getting one over on the other people if you win.

For instance, maybe you'd really spend your winnings on a sports car, but giving the money to an effective charity seems better than nothing, so they're fulfilling your values, but you're not fulfilling theirs.

Naturally, some people feel bad about getting one over on people, and consequently feel some obligation to take their values into account.

There are some circumstances under which this could be reasonable. People could be pooling their donations even though they're risk-averse about charities, simply in order to economize on research time. But in the central case of donor lotteries, everyone likes the deal they're getting, even if they estimate the value of other donors' planned use of the money at zero.

The right way to evaluate the expected value of a donor lottery is to only take the deal if you'd take the same deal from a casino or financial instrument where you didn't think you were value-aligned with your counterparty. Assume, if you will, that everyone else just wants a sports car. If you do this, you won't double-count your impact by pretending that you win even if you lose.
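
Here's a toy version of both accountings; the assumption that a losing participant values the winner's giving at 50 cents on the dollar is made up purely for illustration:

```python
n_donors, stake = 10, 10_000
pool = n_donors * stake      # $100,000

# The double-counting version: each donor credits themselves with the full
# pool if they win, plus 50 cents on the dollar for the winner's giving if
# they lose (the 0.5 is a made-up "partial value alignment" figure).
alignment = 0.5
per_donor_claim = 0.1 * pool + 0.9 * alignment * pool   # $55,000 each
print(n_donors * per_donor_claim, "claimed vs.", pool, "actually given")

# The casino-style accounting: treat the other participants as if they just
# want sports cars. Your expected funds to allocate are exactly your stake.
expected_allocation = 0.1 * pool                         # $10,000
print(expected_allocation)
```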

Claim: returns to scale for individual donations

Donor lotteries were originally proposed as a response to an argument based on returns to scale:

  • Some effective altruists used “lumpy” returns to scale (for instance, where extra money matters only when it tips the balance over to hiring an additional person) to justify favoring charities that turn funds into impact more smoothly.
  • Some effective altruists say that small donors should defer to GiveWell’s recommendations because, for the time it makes sense to spend on allocating a small donation, they shouldn’t expect to do better than GiveWell.

In his original post on making use of randomization to increase scale, Carl Shulman summarizes the case against these arguments:

In a recent blog post Will MacAskill described a donation opportunity that he thought was attractive, but less so for him personally because his donation was smaller than a critical threshold:

This expenditure is also pretty lumpy, and I don’t expect them to get all their donations from small individual donations, so it seems to me that donating 1/50th of the cost of a program manager isn’t as good as 1/50th of the value of a program manager.

When this is true, it can be better to exchange a donation for a 1/50 chance of a donation 50 times as large. One might also think that when donating $1,000,000 rather than $1 one can afford to spend more time and effort in evaluating opportunities, get more access to charities, and otherwise enjoy some of the advantages possessed by large foundations.
Insofar as one believes that there are such advantages, it doesn't make sense to be defeatist about obtaining them. In some ways resources like GiveWell and Giving What We Can are designed to let the effective altruist community mimic a large effectiveness-oriented foundation. One can give to the Gates Foundation, or substitute for Good Ventures to keep its cash reserves high.
However, it is also possible to take advantage of economies of scale by purchasing a lottery (in one form or another), a small chance of a large donation payoff. In the event the large donation case arises, then great efforts can be made to use it wisely and to exploit the economies of scale.

There's more than one reason you might choose to trust the recommendations of GiveWell or Giving What We Can, or directly give to either, or to the Gates Foundation. One consideration is that there are returns to scale for delegating your decisions to larger organizations. Insofar as this is why donors give based on GiveWell recommendations, GiveWell is serving as a sort of nonrandomized donor lottery in which the GiveWell founders declared themselves the winners in advance. The benefit of this structure is that it's available. The obvious disadvantage is that it's hard to verify shared values.

Of course, there are other good reasons why you might give based on GiveWell's recommendation. For instance, you might especially trust their judgment based on their track record. The proposal of donor lotteries is interesting because it separates out the returns to scale consideration, so it can be dealt with on its own, instead of being conflated with other things.

Even if your current best guess is that you should trust the recommendations of a larger donor, if you are uncertain about this, and expect that spending time thinking it through would help make your decision better, then a donor lottery allows you to allocate that research time more efficiently, and make better delegation decisions. There's nothing stopping you from giving to a larger organization if you win, and decide that's the best thing. So, the implications of a position on returns to scale are:

  • If you think that there are increasing returns to scale for the amount of money you have to allocate, then you should be interested in giving money to larger donors who share your values, or giving based on their recommendations. But you should be even more interested in participating in a donor lottery.
  • If you think that there are diminishing returns to scale for the amount of money you have to move, then you should not be interested in giving money to larger donors, participating in a donor lottery, accepting money from smaller donors, or making recommendations for smaller donors to follow.

With those implications in mind, here are some claims it might be good to argue about:

(Cross-posted to my personal blog and Arbital.)


1 This phrasing was suggested by Paul. Here's how Carl describes their roles: "I came up with the idea and basic method, then asked Paul if he would provide a donor lottery facility. He did so, and has been taking in entrants and solving logistical issues as they come up."

2More on susus here and here. More on ROSCAs here, here, here, and here.

When I was trying to find where I'd originally heard about these and didn't remember what they were called, I Googled for poor people in developing countries using lotteries as savings, but most relevant-looking results were about using lotteries to trick poor people into saving. Almost none were about what poor people were already doing to solve their actually existing problems. It turns out, sometimes the poor can do financial engineering when they need to. The global poor aren't necessarily poor because they're stupid or helpless. Seems pretty plausible that in many cases, they're poor because they haven't got enough money.

[Link] The engineer and the diplomat

14 Benquo 27 December 2016 08:49PM

Improve comments by tagging claims

13 Benquo 20 December 2016 05:04PM

I used to think that comments didn’t matter. I was wrong. This is important because communities of discourse are an important source of knowledge. I’ll explain why I changed my mind, and then propose a simple mechanism for improving them that can be implemented on any platform that allows threaded comments.


Canons (What are they good for?)

9 Benquo 13 December 2016 09:34PM

People in the Effective Altruist and Rationalist intellectual communities have been discussing moving discourse back into the public sphere lately. I agree with this goal and want to help. There are reasons to think that we need not only public discourse, but public fora. One reason is that there's value specifically in having a public set of canonical writing that members of an intellectual community are expected to have read. Another is that writers want to be heard, and on fora where people can easily comment, it's easier to tell whether people are listening and benefiting from your writing.

This post begins with a brief review of the case for public discourse. For reasons I hope to make clear in an upcoming post, I encourage people who want to comment on that to click through to the posts I linked to by Sarah Constantin and Anna Salamon. For another perspective you can read my prior post on this topic, Be secretly wrong. The second section explores the case for a community canon, suggesting that there are three distinct desiderata that can be optimized for separately.

This is an essay exploring and introducing a few ideas, not advancing an argument.

Why public discourse?

People have been discussing moving discourse back into the public sphere lately. Sarah Constantin has argued that public criticism-friendly discussion is important for truth-seeking and creating knowledge capital:

There seems to have been an overall drift towards social networks as opposed to blogs and forums, and in particular things like:

  • the drift of political commentary from personal blogs to “media” aggregators like The Atlantic, Vox, and Breitbart
  • the migration of fandom from LiveJournal to Tumblr
  • The movement of links and discussions to Facebook and Twitter as opposed to link-blogs and comment sections

[...]

But one thing I have noticed personally is that people have gotten intimidated by more formal and public kinds of online conversation.  I know quite a few people who used to keep a “real blog” and have become afraid to touch it, preferring instead to chat on social media.  It’s a weird kind of locus for perfectionism — nobody ever imagined that blogs were meant to be masterpieces.  But I do see people fleeing towards more ephemeral, more stream-of-consciousness types of communication, or communication that involves no words at all (reblogging, image-sharing, etc.)  There seems to be a fear of becoming too visible as a distinctive writing voice.

For one rather public and hilarious example, witness Scott Alexander’s  flight from LessWrong to LiveJournal to a personal blog to Twitter and Tumblr, in hopes that somewhere he can find a place isolated enough that nobody will notice his insight and humor. (It hasn’t been working.)

[...]

A blog is almost a perfect medium for personal accountability. It belongs to you, not your employer, and not the hivemind.  The archives are easily searchable. The posts are permanently viewable. Everything embarrassing you’ve ever written is there.  If there’s a comment section, people are free to come along and poke holes in your posts. This leaves people vulnerable in a certain way. Not just to trolls, but to critics.

[...]

We talk a lot about social media killing privacy, but there’s also a way in which it kills publicness, by allowing people to curate their spaces by personal friend groups, and retreat from open discussions.   In a public square, any rando can ask an aristocrat to explain himself.  If people hide from public squares, they can’t be exposed to Socrates’ questions.

I suspect that, especially for people who are even minor VIPs (my level of online fame, while modest, is enough to create some of this effect), it’s tempting to become less available to the “public”, less willing to engage with strangers, even those who seem friendly and interesting.  I think it’s worth fighting this temptation.  You don’t get the gains of open discussion if you close yourself off.  You may not capture all the gains yourself, but that’s how the tragedy of the commons works; a bunch of people have to cooperate and trust if they’re going to build good stuff together.  And what that means, concretely, on the margin, is taking more time to explain yourself and engage intellectually with people who, from your perspective, look dumb, clueless, crankish, or uncool.

Some of the people I admire most, including theoretical computer scientist Scott Aaronson, are notable for taking the time to carefully debunk crackpots (and offer them the benefit of the doubt in case they are in fact correct.)  Is this activity a great ROI for a brilliant scientist, from a narrowly selfish perspective?  No. But it’s praiseworthy, because it contributes to a truly open discussion. If scientists take the time to investigate weird claims from randos, they’re doing the work of proving that science is a universal and systematic way of thinking, not just an elite club of insiders.  In the long run, it’s very important that somebody be doing that groundwork.

Talking about interesting things, with friendly strangers, in a spirit of welcoming open discussion and accountability rather than fleeing from it, seems really underappreciated today, and I think it’s time to make an explicit push towards building places online that have that quality.

In that spirit, I’d like to recommend LessWrong to my readers. For those not familiar with it, it’s a discussion forum devoted to things like cognitive science, AI, and related topics, and, back in its heyday a few years ago, it was suffused with the nerdy-discussion-nature. It had all the enthusiasm of late-night dorm-room philosophy discussions — except that some of the people you’d be having the discussions with were among the most creative people of our generation.  These days, posting and commenting is a lot sparser, and the energy is gone, but I and some other old-timers are trying to rekindle it. I’m crossposting all my blog posts there from now on, and I encourage everyone to check out and join the discussions there.

I agree that we need to move more discussion back into enduring public media so that we can make stable intellectual progress, and outsiders can catch up if they have something to contribute - especially if we're wrong about something they know about. And Anna Salamon's also suggested that common fora such as LessWrong are an especially important means of creating a single conversation:

We need to think about [...] everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about "ways of thinking" -- both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better.

One feature that is pretty helpful here, is if we somehow maintain a single "conversation", rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another. By "a conversation", I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; point out apparent errors and then have that pointing-out be actually taken into account or else replied-to).

One feature that really helps things be "a conversation" in this way, is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read. Less Wrong used to be such a place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.

We have lately ceased to have a "single conversation" in this way. Good content is still being produced across these communities, but there is no single locus of conversation, such that if you're in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such. There is no one place you can post to, where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call them out if they later post reasoning that ignores your evidence. Without such a locus, it is hard for conversation to build in the correct way. (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)

It seems to me, moreover, that Less Wrong used to be such a locus, and that it is worth seeing whether Less Wrong or some similar such place[3] may be a viable locus again. [...]

I suspect that most of the value generation from having a single shared conversational locus is not captured by the individual generating the value (I suspect there is much distributed value from having "a conversation" with better structural integrity / more coherence, but that the value created thereby is pretty distributed). Insofar as there are "externalized benefits" to be had by blogging/commenting/reading from a common platform, it may make sense to regard oneself as exercising civic virtue by doing so, and to deliberately do so as one of the uses of one's "make the world better" effort. (At least if we can build up toward in fact having a single locus.)

My initial thinking was that we should just move from the ephemeral medium of Facebook to people having their own personal blogs. The nice thing about a blogosphere as a mode of discourse is that community boundaries aren't a huge deal - if you think someone's unhelpful, there's nowhere you have to boot them out from - you just stop reading their stuff and linking to them. But this interferes with the "single canon" approach.

What are canons good for?

When I hear people talk about getting communities to read the same things, they often bring up very different sorts of benefits. So far I count three very different things they mean:

  1. Common basic skills and norms
  2. The shoulders of giants
  3. Synchronized discussions

Common basic skills and norms

Some have pointed to LessWrong's Sequences - a series of blog posts by Eliezer Yudkowsky on the art of human rationality - as an example of the kind of text that should be a community canon. I do think that the extended LessWrong community has benefited from internalizing the insights laid out in the Sequences. A lot of the wasted cognitive motion that's common elsewhere seems less common in this community because we know better.

This kind of canon plays an analogous role to professional training among, say, engineers - or in a liberal arts education. You don't necessarily expect a liberally educated person to know each particular book you reference, but you do expect them to know what math, and music, and literature, and philosophy are like on the inside. Not every "liberal arts college" does this anymore, but some still do, and people who get it can recognize each other and have conversations that are simply unavailable between us and people without that background. Not because the material is impossible to communicate, but because there are way too many steps, and because it's not just about getting across a particular argument. It's about different ways of perceiving the world.

Similarly, if I'm talking to someone who has read and understood the Sequences, there are places we can go, quickly, that it might take a long time to explore with someone else who hasn't stumbled across the same insights elsewhere.

This benefit from having a canon requires a substantial investment. For this reason, taking an existing community and pushing a canon on it seems unlikely to work very well without a very large investment. Traditional Western academia did this for a while, but that depended on the authority of existing elite scholars, and a large, hierarchical system that had a near monopoly on literacy and concomitant employment. Judaism seems in some large part formed by its canon, but the process that knit together a tight literary core ended up interposing layers of commentary beginning with the written Talmud in between Jews and the text of the Bible itself.

Taking a canon and forming a community around it, composed of people who find it compelling, seems more tractable right now. LessWrong coalesced around the Sequences (and later, the CFAR workshops). Objectivism coalesced around The Fountainhead and Atlas Shrugged. My alma mater, St. John's College, reformed as a community around the New (Great Books) Program and the students and tutors it attracted.

This suggests to me that the thing to do is to figure out how to teach the skills and norms you want in your community, and see who joins.

The shoulders of giants

A second reason for a canon is so that we don't have to retread old ground. This is not so much about the unarticulated, holistic seeing that comes from having read a corpus of text together, but about having common knowledge of specific accumulated insights.

This is why academics publish in journals, and typically begin papers by reviewing (and citing) prior work on the subject. The academic journal model of published discourse does not really depend on having a single uniform canon. Instead, it relies on common availability of prior sources. A norm of citing, linking, or otherwise directing the reader to prior work is more adaptable to this purpose, because it does not so fully exclude outsiders as a community where everyone is expected to have already learned the thing.

I've tried to follow something like these practices myself, linking to prior work on a subject that's influenced me when I'm aware of it. Much of the Rationalist and EA blogosphere works on this model at least sometimes. But I can think of one thing that could be useful for the academic journal model that doesn't yet exist in the Rationalist or EA communities: a stable archive of prior work, that brings the different sources together - blog posts, academic papers, and personal web pages that advance the Rationalist project. Right now, there's no search capability here, no Google Scholar equivalent to look up "works citing this one" or "works cited by this one." (If you want to advance this project, let me know if you want me to put you in touch with others interested in making this happen.)

Synchronized discussions

When a TV show becomes popular, people who like it often come together to watch it, or discuss each episode, sharing their reactions to it and speculating about characters' motivations or what will happen next. This sort of agenda-setting makes it much easier to have large, complex conversations about such things, while the text (in this case, the episode) is still fresh in everyone's mind. Serialized stories such as Harry Potter and the Methods of Rationality, or the original Harry Potter series, or Unsong, have a similar coordinating effect. If your community's largely following the same texts at the same time, then when you meet a stranger at a party, you don't get stuck talking about the weather.

Getting everyone looking at the same thing at the same time can also spark productive disagreement. If something will be out in public forever, you can put off commenting on it. But if now's the time for everyone to talk about it, now's your only chance to speak up if someone is wrong on the internet.

Tiers and volleys

The synchronization of conversations can be a powerful force for extracting additional value from the intellectual labor people are already doing, and getting them to share their perspectives more promptly and publicly. But if an intellectual community doesn't have people going off on their own, doing self-directed work of the kind that can lead to more academic journal style discourse, then it won't produce deep original work of the kind that may be needed to steer the world in a substantially better direction.

Creating a sort of community "TV show" and a forum for people to comment on it is the least expensive way to extract additional public value from the intellectual activity already going on. Slate Star Codex has to some degree taken over LessWrong's role as a community hub, and provides a good starting point - its author, Scott Alexander, was kind enough to link to a few posts in the attempted LessWrong renaissance, and perhaps will do so again if and when that or some similar effort shows substantially more progress. But I don't expect this on its own to lead to the kind of deep intellectual progress we need.

Some people are already doing work at something more like an academic tempo, doing a fair amount of research on their own, and sharing what they find afterwards. Building a better archive/repository of existing work seems like it could substantially increase the impact of the work people are already doing. And if done well, it could lead to an increase in truly generative, deep work - and maybe even more importantly, less progress lost to the sands of time.

I expect building something like academic journals for the community, and persuading more people to do this sort of intellectual labor, to be substantially slower work, at least if done right, and it will only be worth it if done right. It will require many people to invest substantial intellectual effort, though hopefully they'd want to think deeply about things anyway.

Creating a common conceptual vocabulary, skills, and norms, by contrast, can be very expensive. A full-blown liberal arts education is famously pricey. The Sequences took a year of Eliezer Yudkowsky's time, and I don't think he worked on much else. CFAR has several full-time employees who've been working at it for years. This approach - especially the high-touch version where you educate people in person - is to be used sparingly, when you have a strong reason to believe that you can produce a large improvement that way.

(Cross-posted at my personal blog.)

Be secretly wrong

27 Benquo 10 December 2016 07:06AM

"I feel like I'm not the sort of person who's allowed to have opinions about the important issues like AI risk."

"What's the bad thing that might happen if you expressed your opinion?"

"It would be wrong in some way I hadn't foreseen, and people would think less of me."

"Do you think less of other people who have wrong opinions?"

"Not if they change their minds when confronted with the evidence."

"Would you do that?"

"Yeah."

"Do you think other people think less of those who do that?"

"No."

"Well, if it's alright for other people to make mistakes, what makes YOU so special?"

A lot of my otherwise very smart and thoughtful friends seem to have a mental block around thinking on certain topics, because they're the sort of topics Important People have Important Opinions around. There seem to be two very different reasons for this sort of block:

  1. Being wrong feels bad.
  2. They might lose the respect of others.

Be wrong

If you don't have an opinion, you can hold onto the fantasy that someday, once you figure the thing out, you'll end up having a right opinion. But if you put yourself out there with an opinion that's unmistakably your own, you don't have that excuse anymore.

This is related to the desire to pass tests. The smart kids go through school and are taught - explicitly or tacitly - that as long as they get good grades they're doing OK, and if they try at all they can get good grades. So when they bump up against a problem that might actually be hard, there's a strong impulse to look away, to redirect to something else. So they do.

You have to understand that this system is not real, it's just a game. In real life you have to be straight-up wrong sometimes. So you may as well get it over with.

If you expect to be wrong when you guess, then you're already wrong, and paying the price for it. As Eugene Gendlin said:

What is true is already so. Owning up to it doesn't make it worse. Not being open about it doesn't make it go away. And because it's true, it is what is there to be interacted with. Anything untrue isn't there to be lived. People can stand what is true, for they are already enduring it.

What you would be mistaken about, you're already mistaken about. Owning up to it doesn't make you any more mistaken. Not being open about it doesn't make it go away.

You're already "wrong" in the sense that your anticipations aren't perfectly aligned with reality. You just haven't put yourself in a situation where you've openly tried to guess the teacher's password. But if you want more power over the world, you need to focus your uncertainty - and this only reliably makes you righter if you repeatedly test your beliefs. Which means sometimes being wrong, and noticing. (And then, of course, changing your mind.)

Being wrong is how you learn - by testing hypotheses.

In secret

Getting used to being wrong - forming the boldest hypotheses your current beliefs can truly justify so that you can correct your model based on the data - is painful and I don't have a good solution to getting over it except to tough it out. But there's a part of the problem we can separate out, which is - the pain of being wrong publicly.

When I attended a Toastmasters club, one of the things I liked a lot about giving speeches there was that the stakes were low in terms of the content. If I were giving a presentation at work, I had to worry about my generic presentation skills, but also whether the way I was presenting it was a good match for my audience, and also whether the idea I was pitching was a good strategic move for the company or my career, and also whether the information I was presenting was accurate. At Toastmasters, all the content-related stakes were gone. No one with the power to promote or fire me was present. Everyone was on my side, and the group was all about helping each other get better. So all I had to think about was the form of my speech.

Once I'd learned some general presentation skills at Toastmasters, it became easier to give talks where I did care about the content and there were real-world consequences to the quality of the talk. I'd gotten practice on the form of public speaking separately - so now I could relax about that, and just focus on getting the content right.

Similarly, expressing opinions publicly can be stressful because of the work of generating likely hypotheses, and revealing to yourself that you are farther behind in understanding things than you thought - but also because of the perceived social consequences of sounding stupid. You can at least isolate the last factor, by starting out thinking things through in secret. This works by separating epistemic uncertainty from social confidence. (This is closely related to the dichotomy between social and objective respect.)

Of course, as soon as you can stand to do this in public, that's better - you'll learn faster, you'll get help. But if you're not there yet, this is a step along the way. If the choice is between having private opinions and having none, have private opinions. (Also related: If we can't lie to others, we will lie to ourselves.)

Read and discuss a book on a topic you want to have opinions about, with one trusted friend. Start a secret blog - or just take notes. Practice having opinions at all, that you can be wrong about, before you worry about being accountable for your opinions. One step at a time.

Before you're publicly right, consider being secretly wrong. Better to be secretly wrong, than secretly not even wrong.

(Cross-posted at my personal blog.)

[Link] Mic-Ra-finance and the illusion of control

4 Benquo 07 December 2016 08:00PM

[Link] On Philosophers Against Malaria

2 Benquo 07 December 2016 02:03AM
