
Comment author: paulfchristiano 28 November 2016 05:53:42PM 2 points [-]

You said:

That only helps if your "rationalist community" in fact pushes you to more accurate reasoning... in my experience the "rationalist community" is mostly that in name only.

I find this claim unsettling, since the rationalist community aggressively promotes an unusual set of epistemic norms (e.g. lots of reliance on logic and numeracy, careful scrutiny of sources and claims, and a trade in debunking explanations) which appear to me to be unusually good at producing true beliefs. You presumably have experience with these norms (e.g. you read stuff Eliezer writes, you sometimes talk to at least me and presumably other rationalists, you are sometimes at rationalist parties), and seem to be rejecting the claim that these norms are actually truth-promoting.

I certainly agree that we don't have the kind of evidence that could decisively settle the question to an outsider, and I think skepticism is reasonable. The main way someone would come to be optimistic about the rationalists is by actually looking at and reasoning about rationalist discourse. You seem to have done this though, so I read your comment as a strong suggestion that this reasoning is not very weighty given the absence of a track record that might provide more decisive evidence.

Comment author: RobinHanson 28 November 2016 07:13:33PM 0 points [-]

Even if you use truth-promoting norms, their effect can be weak enough that other effects overwhelm it. The "rationalist community" is different in a great many ways from other communities of thought.

Comment author: paulfchristiano 28 November 2016 12:58:38AM *  8 points [-]

What kind of track record do you expect, and what other people are you comparing us to? For example, are there academic communities for which you would grant the existence of such a track record, outside of the experimental sciences? For those communities, how would you respond to a comment like yours?

For example, I think that economists also have a set of norms for arriving at truer conclusions about society, but they also don't have an easy-to-point-to track record of success as a community.

If you think economists count, then the bay area rationalists will count simply by virtue of arriving at a set of views that mirror mainstream economic views much more closely than does the average US elite consensus. But realistically, I don't think that you can make the kind of case you are looking for on behalf of economists, and if you can, then it will involve weakening the standards in a way that lets us make the same case for rationalists.

If you can't name any communities that have such a track record, then this seems like a weak test of whether a community's efforts to promote accurate conclusions are in name only. (Not necessarily a worthless one, but at least one that should be regarded with skepticism.)

I do think that e.g. bay area rationalists have substantially more accurate views about the topics they talk about than the world at large (on the future, AI, economics, politics, aid, cognitive science, etc.). This is largely driven by observing the rationalist views, using what I consider the best epistemic norms available, and finding the rationalist views to better accord with the output of that process. Make of that what you will.

Bay area rationalists appear to make better investments than average (dominated by very profitable bets on bitcoin, but also bets on AI/tech and a reliance on index funds and skepticism about beating the market), to work in higher-paying jobs, and to hold views that more closely track those of traditionally recognized experts (which I expect to be more accurate than the median elite view). They also make much more extensive quantitative predictions and, in the cases where comparisons are possible, have better predictive track records than pundits (though this is probably just due to being numerate, an issue that makes it basically impossible to compare quantitative track records to conventional elites).

In most cases, the rationalists' high intelligence and prevalence of mental dysfunction are going to have a larger effect on their thinking than the community's norms. So unless we manage to find a control group with similar levels of intelligence, I don't think that pointing to a strong track record is even going to be persuasive to you: you will just (correctly) dismiss it by saying "but we need to compare the rationalists to other people who are similarly smart..." And if we do find people with similar levels of intelligence, then they will quite plausibly be doing better than rationalists on lots of conventional measures, and I will (correctly) dismiss that by saying "but we need to compare the rationalists to other people who have similar levels of other abilities..."

In general, I feel you should engage more with quantitative detail about the difficulty of establishing the kind of track record that would be persuasive. I have a similar complaint regarding fire-the-CEO markets or other scaled-up field experiments. It looks to me like it is going to take forever to make a compelling case if you are relying on track record rather than the theory (unless people are willing to trust short-term market movements, which (a) they mostly aren't, and (b) in that case it's nearly a tautology that fire-the-CEO markets work, and the empirical data is just showing you that nothing surprising goes wrong). Yes, you can take the line that someone else should publish a criticism along these lines, but if you actually want the idea to get adopted, it falls to you to do at least a basic power analysis.
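To give a sense of what even a basic power analysis involves, here is a minimal sketch in Python. All of the numbers (65% vs. 55% accuracy on binary predictions, a 0.05 significance level, 80% power) are hypothetical assumptions chosen only to illustrate the calculation, not figures taken from the discussion above.

```python
# Rough power calculation: how many resolved yes/no predictions per group would
# we need to distinguish a forecaster who is right 65% of the time from one who
# is right 55% of the time? (Both accuracy figures are made up for illustration.)
import math
from statistics import NormalDist

def two_proportion_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided test comparing two proportions."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the significance level
    z_beta = z.inv_cdf(power)           # quantile corresponding to the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil(((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2)

print(two_proportion_sample_size(0.65, 0.55))  # -> 373 predictions per group
```

Even under these fairly favorable assumptions you need a few hundred comparable, resolved predictions per group, which is one way of seeing why a purely track-record-based case could take a very long time to assemble.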

Similarly, you can take the line that the rationalists should be in the business of figuring out exactly what kinds of track record would be persuasive to someone with your perspective. But if you actually want to affect the rationalists' behavior, you would probably need to make some argument that the rationalists could stand to benefit by attempting to establish the kind of track record you are interested in, or that they should infer much from the non-existence of such a record, or something like that.

Comment author: RobinHanson 28 November 2016 01:53:01PM 4 points [-]

I said I haven't seen this community as exceptionally accurate; you say that you have seen that, and you called my view "uncharitable". I then mentioned a track record as a way to remind us that we lack the sort of particularly clear evidence that we agree would be persuasive. I didn't mean that as a criticism that you or others have not worked hard enough to create such a track record. Surely you can understand why outsiders might find suspect a standard under which your community counts as more accurate because it more often agrees with your beliefs.

Comment author: WhySpace 28 November 2016 12:40:47AM *  2 points [-]

Believe Less.

As in, believe fewer things and believe them less strongly? By assigning lower odds to beliefs, in order to fight overconfidence? Just making sure I'm interpreting correctly.

don't bother to hold beliefs on the kind of abstract topics

I've read this sentiment from you a couple times, and don't understand the motive. Have you written about it more in depth somewhere?

I would have argued the opposite. It seems like societal acceptance is almost irrelevant as evidence of whether that world is desirable.

Comment author: RobinHanson 28 November 2016 12:50:54AM 1 point [-]

Yes, believe fewer things and believe them less strongly. On abstract beliefs, I'm not following you. The usual motive for most people is that they don't need most abstract beliefs to live their lives.

Comment author: paulfchristiano 27 November 2016 05:01:13PM 3 points [-]

This seems too uncharitable (I mean, "mostly" is kind of ambiguous in this context so it might be true, but...). I have plenty of complaints, and certainly things could be much better, but I think the rationalists in fact reward accuracy / high-quality reasoning much more than the surrounding community of bay area engineers, which itself rewards accuracy much more than US elite culture, which itself rewards accuracy much more than US culture more broadly.

For example, we do in fact put an unusual amount of stock on correct logical argument, sound probabilistic reasoning, and scientific inquiry, which do in fact tend to produce more accurate conclusions.

Comment author: RobinHanson 27 November 2016 07:57:10PM 4 points [-]

"charitable" seems an odd name for the tendency to assume that you and your friends are better than other people, because well it just sure seems that way to you and your friends. You don't have an accuracy track record of this group to refer to, right?

Comment author: RobinHanson 27 November 2016 06:31:46PM 20 points [-]

I have serious doubts about the basic claim that "the rationalist community" is so smart and wise and on to good stuff compared to everyone else that it should focus on reading and talking to each other at the expense of reading others and participating in other conversations. There are obviously cultish, in-group-favoring biases pushing this way, and I'd want strong evidence before I attributed this push to anything else.

Comment author: paulfchristiano 27 November 2016 04:54:41PM 1 point [-]

No. 2 might be better thought of as "What my talk is optimized for."

I care much more about the fact that "my conscious thoughts are optimized for X" than "my talk is optimized for X," though I agree that it might be easier to figure out what our talk is optimized for.

if you want to make the two results more consistent, you want to move your talk closer to action

I'm not very interested in consistency per se. If we just changed my conscious thoughts to be in line with my type 1 preferences, that seems like it would be a terrible deal for my type 2 preferences.

As with bets, or other more concrete actions.

Sometimes bets can work and I make many more bets than most people, but quantitatively speaking I am skeptical of how much they can do (how large they would have to be, on what range of topics they are realistic, and what the other attendant costs are). Using conservative epistemic norms seems like it can accomplish much more.

If we want to tie social benefit to accuracy, it seems like it would be much more promising to use "the eventual output of conservative epistemic norms" as our gold standard, rather than "what eventually happens" (i.e. reality), because it is available (a) much sooner, (b) with lower variance, and (c) on a much larger range of topics.

(An obvious problem with that is that it gives people larger motives to manipulate the output of the epistemic process. If you think people already have such incentives then it's not clear this is so bad.)

Comment author: RobinHanson 27 November 2016 06:23:11PM 2 points [-]

I meant to claim that in fact your conscious thoughts are largely optimized for good impact on the things you say.

You can of course bet on eventual outcome of conservative epistemic norms, just as you can bet on what actually happens. Not sure what else you can do to create incentives now to believe what conservative norms will eventually say.

Comment author: sarahconstantin 27 November 2016 10:42:46AM 8 points [-]

So, on ways of smoothing the incentive gradient for high-quality reasoning:

This is a reason to have a "rationalist community." Humans are satisficers. We won't really care about the opinion of literally all 7 billion people on Earth if we have the approval of our own tribe. If our tribe has some norms about how conversation and thinking work, then we'll be pretty able to follow those norms, so long as we expect that our needs are meetable within the tribe -- that is, that it's a good place to find friends, mates, careers, etc.

It's also a reason to think about how UX affects discourse. I'm by no means an expert in this, but for instance: What does karma reward? What types of expression get attention? How can we offer rewards for behaviors we like?

Comment author: RobinHanson 27 November 2016 02:02:07PM 5 points [-]

That only helps if your "rationalist community" in fact pushes you to more accurate reasoning. Merely giving your community that name is far from sufficient, however, and in my experience the "rationalist community" is mostly that in name only.

Comment author: RobinHanson 27 November 2016 02:00:25PM *  6 points [-]

"the most powerful tool is adopting epistemic norms which are appropriately conservative; to rely more on the scientific method, on well-formed arguments, on evidence that can be clearly articulated and reproduced, and so on."

A simple summary: Believe Less. Hold higher standards for what is sufficient reason to believe. Of course this is in fact what most people actually do. They don't bother to hold beliefs on the kind of abstract topics on which Paul wants to hold beliefs.

"1. What my decisions are optimized for. .. 2. What I consciously believe I want."

No. 2 might be better thought of as "What my talk is optimized for." Both systems are highly optimized. This way of seeing it emphasizes that if you want to make the two results more consistent, you want to move your talk closer to action. As with bets, or other more concrete actions.

Comment author: Manfred 20 January 2015 05:55:39AM *  7 points [-]

Funny timing! Or, good Baader-Meinhofing :P

Although selfishness w.r.t. copies is a totally okay preference structure, rational agents (with a world-model like we have, and no preferences explicitly favoring conflict between copies) will want to precommit or self-modify so that their causal descendants will cooperate non-selfishly.

In fact, if there is a period where the copies don't have distinguishing indexical information that greatly uncorrelates their decision algorithm, copies will even do the precommitting themselves.

Therefore, upon waking up and learning that I am a copy, but before learning much more, I will attempt to sign a contract with a bystander stating that if I do not act altruistically towards my other copies who have signed similar contracts, I have to pay them my life savings.

Comment author: RobinHanson 20 January 2015 04:06:57PM 4 points [-]

If signing a contract were all we needed to coordinate well, we would already be coordinating as much as is useful now. We already have good strong reasons to want to coordinate for mutual benefit.

Comment author: Capla 14 January 2015 04:00:37AM 0 points [-]

Is it feasible to make each "family" or "lineage" responsible for itself?

You can copy yourself as much as you want, but you are responsible for sustaining each copy?

Could we carry this further? Legally, no distinction is made between individuals and collections of copied individuals. It doesn't matter if you're one guy or a "family" of 30,000 people all copied (and perhaps subsequently modified) from the same individual: you only get one vote, and you're culpable if you commit a crime. How these collectives govern themselves is their own business, and even if it's dictatorial, you might argue that it's "fair" on the basis that the copies made choices (before they split up) to dominate other copies. If you're a slave in a dictatorial regime, it can only be because you're the sort of person who defects on prisoner's dilemmas and seizes control when you can.

Maybe when some members become sufficiently different from the overall composition, they break off and become their own collective? Maybe this happens only at set times, to prevent rampant copying from swamping elections?

Comment author: RobinHanson 14 January 2015 07:52:05PM 2 points [-]

Not only is this feasible, this is in fact the usual default situation in a simple market economy.
