
What will rationality look like in the future?

7 Post author: DataPacRat 03 February 2012 01:28AM

One of the standard methods of science-fiction world-building is to take a current trend and extrapolate it into the future, and see what comes out. One trend I've observed is that over the last century or so, people have kept coming up with clever new ways to find answers to important questions - that is, developing new methods of rationality.


So, given what we currently know about the overall shape of such methods, from Gödel's Incompleteness Theorems to Kolmogorov complexity to the various ways of getting around Prisoner's Dilemmas... then, at least in a general science-fictional world-building sense, what might we be able to guess or say about what rationalists will be like in, oh, 50-100 years?

Comments (21)

Comment author: see 03 February 2012 02:59:35AM 11 points [-]

If a good rationalist could predict with reasonably high probability what methods good rationalists would use in 50-100 years, wouldn't said rationalist immediately update to use those methods now, invalidating his own prediction?

Comment author: JoshuaZ 03 February 2012 03:46:52AM 10 points [-]

Well, one could pick specific issues that one thinks we'll understand better. For example, we might have a better understanding of certain cognitive biases, or better tactics for dealing with them. This is similar to how someone in 1955 could have made predictions about space travel even if they couldn't design a fully functioning spacecraft.

Comment author: [deleted] 03 February 2012 02:21:11PM *  7 points [-]

Not if those options are currently too computationally difficult to run. For instance, I'm currently considering the prediction "In the future, good rationalists will use today's rational methods of thinking, but they will use them faster and with more automation and computer assistance."

To give an example, imagine if a person currently posting on Less Wrong was much older, and was still posting about rationality. And that person had a little helper script that would interject into an argument you were going to make with "Is this part here an appeal to emotion?"

You could retranslate that into the advice "Thoroughly recheck all of your arguments to make sure you aren't making basic mistakes" and suggest it right now. It's good advice. I try to do it, but I don't do it enough, and I still miss things. I think AnnaSalamon pointed out that one thing she noticed from her work writing the rationality curriculum is that she was doing it more often. So it's certainly an improvable skill.

But right now, (or even if that planned rationality curriculum works brilliantly) a rationalist would still have to reread posts or review thoughts and find those manually. It seems like this could be automated in the future, for at least some types of basic mistakes. I would not at all be surprised if some mistakes were harder to find than others. So in addition to spell check, and grammar check, in the future we might have fallacy check and/or bias check, with the same types of caveats and flaws that those automated checkers had had during their development.
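As a purely illustrative sketch of what the crudest version of such a "fallacy check" pass might look like: the phrase lists and the whole keyword-matching approach below are invented for the example, and a real checker would need far more context than this, just as early grammar checkers did.

```python
import re

# Deliberately naive "fallacy check": flag phrases that often (but by no
# means always) accompany common rhetorical mistakes. The marker phrases
# here are made up for illustration; they are not a real fallacy taxonomy.
FALLACY_PATTERNS = {
    "appeal to emotion": [r"\bthink of the children\b", r"\bhow dare\b"],
    "ad hominem": [r"\byou people\b", r"\bonly an idiot\b"],
    "appeal to popularity": [r"\beveryone knows\b", r"\bnobody believes\b"],
}

def fallacy_check(text):
    """Return a list of (fallacy_name, matched_phrase) warnings for `text`."""
    warnings = []
    for name, patterns in FALLACY_PATTERNS.items():
        for pattern in patterns:
            for match in re.finditer(pattern, text, re.IGNORECASE):
                warnings.append((name, match.group(0)))
    return warnings

print(fallacy_check("Everyone knows this policy is wrong. Think of the children!"))
```

Like a spell checker, this would produce both false positives and false negatives; the interesting part of the prediction is that the pass runs automatically rather than requiring a manual reread.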

Now that I've actually laid out the prediction, I do find it compelling, but that might just be because I can't see any obvious flaws in the passes I made to recheck it. There is also a limited amount of time I can spend reviewing it before the idea seems stale, I want to move on, or I feel I have checked it enough, haven't found anything, and am fairly confident its accuracy would be too difficult to improve.

Edit: Corrected spelling. (After mentioning spell checkers and their caveats and flaws in my post, one of which I have just been reminded of is that they don't fix usernames.)

Comment author: roystgnr 04 February 2012 07:37:11PM 1 point [-]

Having a few very good rationalists applying "fallacy check" and "bias check" to all their own essays would be wonderful... but just imagine the implications of having many mediocre rationalists regularly applying "fallacy check" and "bias check" to their politicians' essays and speeches.

I'd love to see what kind of feedback that provides to the politicians' speechwriters. "Well, sir, we could say that, and it could give us a nice brief popularity boost, but would that be worth the blowback we get once everybody's talking about how we sent their fallacy-meters off the charts?"

Comment author: Eugine_Nier 05 February 2012 09:33:37PM 2 points [-]

but just imagine the implications of having many mediocre rationalists regularly applying "fallacy check" and "bias check" to their politicians' essays and speeches.

Their ability to do this without getting mind-killed is very much open to question.

Comment author: TimS 03 February 2012 03:14:09AM 4 points [-]

There are lots of open social science-ish problems (e.g., optimal employee management, clinical psychology, effective political organizing, child raising). I expect that 50-100 years from now experts will have a much better grasp of the best responses to these problems, roughly in parallel to how experts have a better grasp of heart surgery than they did 50 years ago. Likewise, I expect public understanding of the solutions will be at the level of today's public understanding of heart surgery - the average reader of the New York Times knows the basics of what it is, why you'd do it, and has a very basic idea of problems that could arise (i.e. knows organ rejection is possible).

Comment author: Eugine_Nier 03 February 2012 03:57:52AM 7 points [-]

I'm not sure; attempts to solve social science-ish problems tend to get derailed by status signalling in ways that heart surgery does not.

Comment author: TimS 03 February 2012 04:10:06AM 9 points [-]

Every time I go to renew my vehicle registration or my driver's license, the facility is better streamlined. That's the result of social science research. I just think we'll keep getting better at it, so more and more will be accepted at the level of traditional medicine. That's not to say that mindkilling won't continue to be a huge risk in those fields.

Comment author: roystgnr 04 February 2012 07:27:38PM 2 points [-]

Every time I go to renew my vehicle registration or renew my driver's license, the facility is better streamlined. That's the result of social science research.

Are you sure? Bureaucratic record keeping is almost the most inherently computerizable, networkable, software-automatible task I can imagine, and as it happens we have been making some incredible strides in computers, networking, and software for the past few decades...

Comment author: TimS 04 February 2012 08:08:53PM 1 point [-]

The advances in queuing people efficiently are not a product of advancements in software or hardware. In other words, I think of the insights that led to the creation of Disney's FastPass as social science advancements.

Comment author: scientism 03 February 2012 07:25:05PM 3 points [-]

I have this dream where you have a supercomputer and you feed it all the world's academic papers and so forth and using a set of heuristics it highlights all the parts of the documents that have markers for various confusions, biases, and errors, then it ranks the documents according to some sort of rationality index, and traces all the connections through citations, etc, to produce a complete map of rationality in the sciences. You can immediately see where the clearest thinking is being done, drill down to discover the most rational researchers and even see highlighted sentences that display biases, confusions, errors, etc. All without a hint of intelligence.
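As a toy illustration of the ranking step in that dream, assuming nothing about what the real heuristics would be: the marker list and scoring formula below are invented purely for the example.

```python
# Toy "rationality index": count crude overconfidence markers per document,
# normalize by length, and rank documents by the resulting score. Both the
# marker list and the 1/(1 + penalty) formula are made-up placeholders for
# whatever heuristics a real system would use.
BIAS_MARKERS = ["obviously", "everyone agrees", "it is well known"]

def rationality_index(text):
    """Higher is better: penalize marker hits, normalized per 1000 words."""
    words = max(len(text.split()), 1)
    hits = sum(text.lower().count(marker) for marker in BIAS_MARKERS)
    return 1.0 / (1.0 + 1000.0 * hits / words)

papers = {
    "paper_a": "Obviously the result holds, and everyone agrees it is important.",
    "paper_b": "We measured X under conditions Y and observed Z with error bars W.",
}
ranked = sorted(papers, key=lambda p: rationality_index(papers[p]), reverse=True)
print(ranked)
```

The citation-tracing part of the dream would then just be graph traversal over these per-document scores.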

Comment author: Alex_Altair 04 February 2012 12:01:24AM 1 point [-]

I wish I had dreams that awesome and complicated.

Comment author: Wrongnesslessness 03 February 2012 05:48:35AM 3 points [-]

The powers of instrumental rationality in the context of rapid technological progress and the inability/unwillingness of irrational people to listen to rational arguments strongly suggest the following scenario:

After realizing that turning a significant portion of the general population into rationalists would take much more time and resources than simply taking over the world, rationalists will create a global corporation with the goal of saving humankind from the clutches of zero- and negative-sum status games.

Shortly afterwards, the Rational Megacorp will indeed take over the world and the people will get good government for the first time in the history of the human race (and will live happily ever after).

Comment author: Normal_Anomaly 04 February 2012 03:14:23PM 4 points [-]

I find it very unlikely that this will happen, mostly due to a lack of sufficiently effective rationalists with an interest in taking over the world directly and the moral fiber to provide good government once they do so. But I think it would be awesome.

Comment author: hamnox 06 February 2012 01:11:56AM 0 points [-]

This sounds remarkably like my dream. But I figured that we'd take over some of the world, institute mandatory rationality training in that part, use our Nation of Rationalists to take over the rest of the world, and then go out and start colonizing space.

Comment author: atucker 03 February 2012 04:36:24PM 2 points [-]

We'll probably have dealt with that akrasia thing.

Comment author: billswift 03 February 2012 02:22:24PM 2 points [-]

Better computer support that encourages and allows more complete evaluation of alternatives.

Comment author: faul_sname 03 February 2012 01:52:01AM *  2 points [-]

Assuming no singularity/other game-changer?

Comment author: DataPacRat 03 February 2012 01:55:24AM 1 point [-]

Assuming no detectable singularity, anyway. Or, if you think one's inevitable, then feel free to consider just the time leading up to it.

Comment author: djcb 04 February 2012 03:16:09PM 1 point [-]

Interesting question... I'm sure with our BrainPals™ (as seen in John Scalzi's Old Man's War series) we can better quantify alternatives, as well as take more data into consideration. So, if someone on the street asks you for something, you "intuitively" sense that there's a 12% chance he wants to mug you, based on certain parameters. Of course, that's just improved applications of a known method.

Taking a step back, it's also interesting to see what will happen to rationalism in the general population -- are we becoming more rational over time? Or is it just something for a small group? I think that today the methods of rationality are at least available to more people (some of the smartest people in previous ages could have made good use of that!), but that doesn't mean humanity as a whole gets more rational.

Comment author: loveandthecoexistenc 04 February 2012 01:50:47AM 1 point [-]

If some specific projects (...Rationality Curriculum) become a success, rationality will be much more widespread and, as a result, much less defined.

And 50 years is a bit too much for any notable concentrations of probability.