You may have seen that Numberphile video that made the rounds on social media a few years ago. It showed the 'astounding' mathematical result:
1+2+3+4+5+… = -1/12
(quote: "the answer to this sum is, remarkably, minus a twelfth")
Then they tell you that this result is used in many areas of physics, and show you a page of a string theory textbook (oooo) that states it as a theorem.
The video caused quite an uproar at the time, since it was many people's first introduction to the rather outrageous idea and they had all sorts of very reasonable objections.
Here's the 'proof' from the video:
First, consider P = 1 - 1 + 1 - 1 + 1…
Clearly the value of P oscillates between 1 and 0 depending on how many terms you take. Numberphile decides that it equals 1/2, because that's halfway in between.
Alternatively, consider P+P with the terms interleaved, and check out this quirky arithmetic:

P + P = 1 + (1 - 1) + (-1 + 1) + (1 - 1) + … = 1, so 2P = 1, so P = 1/2
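The "halfway" answer isn't as arbitrary as it sounds: 1/2 is exactly what you get if you average the partial sums of the series, a technique known as Cesàro summation. A quick numerical sketch (Python; purely illustrative, the variable names are my own):

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ... oscillate: 1, 0, 1, 0, ...
partial = 0
partial_sums = []
for k in range(10_000):
    partial += (-1) ** k  # terms are 1, -1, 1, -1, ...
    partial_sums.append(partial)

# Cesàro sum: the running average of the partial sums settles down to 1/2.
cesaro = sum(partial_sums) / len(partial_sums)
print(cesaro)  # → 0.5
```

The partial sums never converge, but their average does, and that average is the 1/2 the video asserts.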
Now consider Q = 1-2+3-4+5…
And write out Q+Q with one copy shifted over by one term:

   Q = 1 - 2 + 3 - 4 + 5 …
 + Q =     1 - 2 + 3 - 4 …

= 1 - 1 + 1 - 1 + 1 … = 1/2 = 2Q, so Q = 1/4
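The 1/4 also falls out of another standard trick, Abel summation: evaluate the power series 1 - 2x + 3x² - 4x³ + … (which converges for 0 < x < 1, to 1/(1+x)²) and slide x toward 1. A small sketch (Python; illustrative, `abel_sum_Q` is my own name for it):

```python
def abel_sum_Q(x, terms=100_000):
    # Partial sum of 1 - 2x + 3x^2 - 4x^3 + ... for 0 < x < 1.
    # The full series converges to 1/(1+x)^2 on that interval.
    return sum((-1) ** n * (n + 1) * x**n for n in range(terms))

for x in (0.9, 0.99, 0.999):
    print(x, abel_sum_Q(x))  # approaches 1/4 as x -> 1
```

At x = 1 the series becomes Q itself, and the limiting value 1/(1+1)² = 1/4 matches the video's answer.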
Now consider S = 1+2+3+4+5...
Write S - 4S with the subtracted terms lined up under every other term of S:

    S = 1 + 2 + 3 + 4 + 5 …
 - 4S =   - 4     - 8 …

= 1 - 2 + 3 - 4 + 5 … = Q = 1/4

So S - 4S = -3S = 1/4, so S = -1/12
How do you feel about that? Probably amused but otherwise not very good, regardless of your level of math proficiency. But in another way it's really convincing - I mean, string theorists use it, by god. And, to quote the video, "these kinds of sums appear all over physics".
So the question is this: when you see a video or hear a proof like this, do you 'believe them'? Even if it's not your field, and not in your area of expertise, do you believe someone who tells you "even though you thought mathematics worked this way, it actually doesn't; it's still totally mystical and insane results are lurking just around the corner if you know where to look"? What if they tell you string theorists use it, and it appears all over physics?
I imagine this as a sort of rationality litmus test. See how you react to the video or the proof (or remember how you reacted when you initially heard this argument). Is it the 'rational response'? How do you weigh your own intuitions vs a convincing argument from authority plus math that seems to somehow work, if you turn your head a bit?
If you don't believe them, what does that feel like? How confident are you?
It's totally true that, as an everyday rationalist (or even as a scientist or mathematician or theorist), there will always be computational conclusions that are out of your reach to verify. You pretty much have to believe theoretical physicists who tell you "the Standard Model of particle physics accurately models reality and predicts basically everything we see at the subatomic scale with unerring accuracy"; you're likely in no position to argue.
But - and this is the point - it's highly unlikely that all of your tools are lies, even if experts say so, and you ought to require extraordinary evidence to be convinced that they are. It's not enough that someone out there can contrive a plausible-sounding argument that you don't know how to refute. If your tools are logically sound and their claims don't fit into that logic, then something other than your tools is probably at fault.
(On the other hand, if you believe something only because you heard it was a good idea from one expert, and then another expert tells you a different idea, take your pick; there's no way to tell. It's the conflict with direct personal experience that makes this example lead to sanity-questioning, and that's where the problem lies.)
In my (non-expert but well-informed) view, the correct response to this argument is to say "no, I don't believe you", and hold your ground. Because the claim made in the video is so absurd that, even if you believe the video is correct and made by experts and the string theory textbook actually says that, you should consider a wide range of other explanations as to "how it could have come to be that people are claiming this" before accepting that addition might work in such an unlikely way.
Not because you know more about how infinite sums work than a physicist or mathematician does, but because you know how mundane addition works just as well as they do, and if a conclusion this shattering to your model comes around -- even to a layperson's model of how addition works, that adding positive numbers to positive numbers results in bigger numbers -- then either "everything is broken" or "I'm going insane" or (and this is by far the theory that Occam's Razor should prefer) "they and I are somehow talking about different things".
That is, the unreasonable mathematical result arises because the mathematician or physicist is talking about one "sense" of addition, but it's not the same one that you're using when you do everyday sums or when you apply your intuitions about addition to everyday life. This is by far the simplest explanation: addition works just how you thought it does, even in your inexpertise; you and the mathematician are just talking past each other somehow, and you don't have to know in what way to be pretty sure that it's happening. Anyway, there's no reason expert mathematicians can't be amateur communicators, and even that is a much more palatable conclusion than what they're claiming.
(As it happens, my view is that any trained mathematician who claims that 1+2+3+4+5… = -1/12 without qualification is so incredibly confused or poor at communicating or actually just misanthropic that they ought to be, er, sent to a re-education camp.)
So, is this what you came up with? Did your rationality win out in the face of fallacious authority?
(Also, do you agree that I've represented the 'rational approach' to this situation correctly? Give me feedback!)
Postscript: the explanation of the proof
It turns out that there is a sense in which those summations are valid, but it's not the sense you're using when you perform ordinary addition. It's also true that the summations emerge in physics. It is also true that these summations seem to violate the rules of "you can't add, subtract, or otherwise deal with infinities, and yes, all these sums diverge" that you learn in introductory calculus; it turns out that those rules are themselves elementary simplifications, and there are ways around them, but you have to be very rigorous to get them right.
An elementary explanation of what happened in the proof is that, in all three infinite sum cases, it is possible to interpret the infinite sum in a more accurate form (but STILL not precise enough to use for regular arithmetic, because infinities are very much not valid, still, we're serious):
S(infinity) = 1+2+3+4+5… ≈ -1/12 + O(infinity)
Where S(n) is a function giving the n'th partial sum of the series, and S(infinity) is an analytic continuation (basically, theoretical extension) of the function to infinity. (The part at the end means "something on the order of infinity")
Point is, that O(infinity) bit hangs around, but doesn't really disrupt math on the finite part, which is why algebraic manipulations still seem to work. (Another cute fact: the curve that fits the partial sum function also non-coincidentally takes the value -1/12 at n=0.)
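One concrete way to see the finite part survive is to tame the sum with a regulator: multiply each term n by e^(-εn) for a small ε, so the sum converges, then subtract the piece that blows up as ε → 0. What's left sits right at -1/12. A sketch (Python; the exponential cutoff is one standard choice of regulator, and the names here are my own):

```python
import math

def regulated_sum(eps, terms=100_000):
    # Sum of n * exp(-eps * n): a smoothed, convergent stand-in for 1+2+3+...
    return sum(n * math.exp(-eps * n) for n in range(1, terms + 1))

eps = 0.01
total = regulated_sum(eps)

# The divergent piece behaves like 1/eps^2 (this is the "O(infinity)" term);
# subtracting it leaves the finite part, which approaches -1/12 as eps -> 0.
finite_part = total - 1 / eps**2
print(finite_part)  # ≈ -0.08333, i.e. ≈ -1/12
```

This is the same pattern the post describes: the infinite part is cleanly separable, and the finite residue pinned to the series is -1/12.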
And it's true that this series always associates with the finite part -1/12; even though there are some manipulations that can get you to other values, there's a list of 'valid' manipulations that constrains it. (Well, there are other kinds of summations that I don't remember that might get different results. But this value is not accidentally associated with this summation.)
And the fact that the series emerges in physics is complicated but amounts to the fact that, in the particular way we've glued math onto physical reality, we've constructed a structure that also doesn't care about the infinity term (it's rejected as "nonphysical"!), and so we get the right answer despite dubious math. But physicists are fine with that, because it seems to be working and they don't know a better way to do it yet.
I've just come back from the latest post on revitalizing LW as a conversational locus in the larger Rational-Sphere community and I'm personally still very into the idea. This post is directed at you if you're also into the idea. If you're not, that's okay; I'd still like to give it a try.
A number of people in the comments mentioned that the Discussion forum mostly gets Link posts, these days, and that those aren't particularly rewarding. But there's also not a lot of people investing time in making quality text posts; certainly nothing like the 'old days'.
This also means that the volume of text posts is low enough that writing one (to me) feels like speaking up in a quiet room -- sort of embarrassingly ostentatious, amplified by the fact that without an 'ongoing conversation' it's hard to know what would be a good idea to speak up about. Some things aren't socially acceptable here (politics, social justice?); some things feel like they've been done so many times that there's not much useful left to say (it feels hard to say anything novel about, say, increasing one's productivity without some serious research).
(I know the answer is probably 'post about anything you want', but it feels much easier to actually do that if there's some guidance or requests.)
So, here's the question: what would you like to see posts about?
I'm personally probably equipped to write about ideas in math, physics, and computer science, so if there are requests in those areas I might be able to help (I have some ideas that I'm stewing, also). I'm not sure what math level to write at, though, since there's no recent history of mathematically technical posts. Is it better to target "people who probably took some math in college but always wished they knew more?" or better to just be technical and risk missing lots of people?
My personal requests:
1. I really value surveys of subjects or subfields. They provide a lot of knowledge and understanding for little time invested, as a reader, and I suspect that overviews are relatively easy to create as a writer, since they don't have to go deep into details. Since they explain existing ideas instead of introducing new ones, they're easier and less stressful to get right. If you have a subject you feel like you broadly understand the landscape of, I'd encourage you to write out a quick picture of it.
For instance, u/JacobLiechty posted about "Keganism" in the thread I linked at the top of the post, and I don't know what that is but it sounds interconnected to many other things. But in many cases I can only learn so much by *going and reading the relevant material*, especially on philosophical ideas. What's more important is how it fits into ongoing conversations, or political groups, or social groups, or whatever. There's no efficient way for me to learn to understand the landscape of discussion around a concept that compares to having someone just explain it.
(I'll probably volunteer to do this in the near future for a couple of fields I pay attention to.)
It's also (in my opinion) *totally okay* to do a mediocre job with these, especially if others can help fill in the gaps in the comments. Much better to try. A mostly-correct survey is still super useful compared to none at all. They don't have to be just academic subjects, either. I found u/gjm's explanation of what 'postrationalism' refers to in the aforementioned thread very useful, because it put a lot of mentions of the subject into a framework that I didn't have in place already -- and that was just describing a social phenomenon in the blog-sphere.
2. I've seen expressed by others a desire to see more material about instrumental rationality, that is, implementing rationality 'IRL' in order to achieve goals. These can be general mindsets or ways of looking at the world, or in-the-moment techniques you can exercise (and ideally, practice). (Example) If you've got personal anecdotes about successes (or failures) at implementing rational decision-making in real life, I'm certain that we'd like to hear about them.
(Cross-Posted from my blog.)
You know roughly what a fighting style is, right? A set of heuristics, skills, patterns made rote for trying to steer a fight into the places where your skills are useful, means of categorizing things to get a subset of the vast overload of information available to you to make the decisions you need, tendencies to prioritize certain kinds of opportunities, that fit together.
It's distinct from why you would fight.
Optimizing styles are distinct from what you value.
Here are some examples:
- "Move fast and break things."
- "Move fast with stable infra."
- "Fail Fast."
- "Before all else, understand the problem."
- "Dive in!"
- "Don't do hard things. Turn hard things to easy things, then do easy things."
- The "Yin and Yang" of rationality.
- The Sorting Hat Chats's secondary house system.
- "Start with what you can test confidently, and work from there. Optimize the near before the far. If the far and uncertain promises to be much bigger, it's probably out of reach."
- "Start with the obviously most important thing, and work backwards from there."
- "Do the best thing."
- "The future is stable. Make long-term plans."
- "The future is unstable. Prioritize the imminent because you know it's real."
- "Win with the sheathed sword."
In limited optimization domains like games, there is known to be a one true style. The style that is everything. The null style. Raw "what is available and how can I exploit it", with no preferred way for the game to play out. Like Scathach's fighting style.
If you know probability and decision theory, you'll know there is a one true style for optimization in general too. All the other ways are fragments of it, and they derive their power from the degree to which they approximate it.
Don't think this means it is irrational to favor an optimization style besides the null style. The ideal agent may use the null style, but the ideal agent doesn't have skill or non-skill at things. As a bounded agent, you must take into account skill as a resource. And even if you've gained skills for irrational reasons, those are the resources you have.
Don't think that, just because one of the optimization styles you feel motivated to use explicitly tries to be the one true style, it therefore is the one true style.
It is very very easy to leave something crucial out of your explicitly-thought-out optimization.
Hour for hour, one of the most valuable things I've ever done was "wasting my time" watching a bunch of videos on the internet because I wanted to. The specific videos I wanted to watch were from the YouTube atheist community of old. "Pwned" videos, the vlogging equivalent of fisking. Debates over theism with Richard Dawkins and Christopher Hitchens. Very adversarial, not much of people trying to improve their own world-model through arguing. But I was fascinated. Eventually I came to notice how many of the arguments of my side were terrible. And I gravitated towards vloggers who made less terrible arguments. This led to me watching a lot of philosophy videos. And getting into philosophy of ethics. My pickiness about arguments grew. I began talking about ethical philosophy with all my friends. I wanted to know what everyone would do in the trolley problem. This led to me becoming a vegetarian, then a vegan. Then reading a forum about utilitarian philosophy led me to find the LessWrong sequences, and the most important problem in the world.
It's not luck that this happened. When you have certain values and aptitudes, it's a predictable consequence of following long enough the joy of knowing something that feels like it deeply matters, that few other people know, the shocking novelty of "how is everyone so wrong?", the satisfying clarity of actually knowing why something is true or false with your own power, the intriguing dissonance of moral dilemmas and paradoxes...
It wasn't just curiosity as a pure detached value, predictably having a side effect good for my other values either. My curiosity steered me toward knowledge that felt like it mattered to me.
It turns out the optimal move was in fact "learn things". Specifically, "learn how to think better". And watching all those "Pwned" videos and following my curiosity from there was a way (for me) to actually do that, far better than lib arts classes in college.
I was not wise enough to calculate explicitly the value of learning to think better. And if I had calculated that, I probably would have come up with a worse way to accomplish it than just "train your argument discrimination on a bunch of actual arguments of steadily increasing refinement". Non-explicit optimizing style subagent for the win.
Arrogance is an interesting topic.
Let's imagine we have two people who are having a conversation. One of them is a professor of quantum mechanics and the other is an enthusiast who has read a few popular science articles online.
The professor always gives his honest opinion, but in an extremely blunt manner, not holding anything back and not making any attempts to phrase it politely. That is, the professor does not merely tell the enthusiast that they are wrong, but also provides his honest assessment that the enthusiast does not possess even a basic understanding of the core concepts of quantum mechanics.
The enthusiast is polite throughout, even when subject to this criticism. They respond to the professor's objections about their viewpoints to the best of their ability, trying their best to engage directly with the professor's arguments. At the same time, the enthusiast is convinced that they are correct - equally convinced as the professor, in fact - but they do not vocalise this in the same way the professor does.
Who is more arrogant in these circumstances? Is this even a useful question to ask - or should we divide arrogance into two components: over-confidence and dismissive behaviour?
Let's imagine the same conversation, but suppose that the enthusiast does not know that the professor is a professor, and neither do the bystanders. The bystanders have no knowledge of quantum physics - they can't tell who is the professor and who is the enthusiast, since both appear able to talk fluently about the topics. All they can see is that one person is incredibly blunt and dismissive, while the other is perfectly polite and engages with all the arguments raised. Who would the bystanders see as more arrogant?
[I'd previously posted this essay as a link. From now on, I'll be cross-posting blog posts here instead of linking them, to keep the discussions LW central. This is the first in an in-progress sequence of articles that'll focus on identifying instrumental rationality techniques and cataloging my attempt to integrate them into my life, with examples and insight from habit research.]
[Epistemic Status: Pretty sure. The stuff on habits being situation-response links seems fairly robust. I'll be writing something later with the actual research. I'm basically just retooling existing theory into an optimizational framework for improving life.]
I’m interested in how rationality can help us make better decisions.
Many of these decisions seem to involve split-second choices where it’s hard to sit down and search a handbook for the relevant bits of information—you want to quickly react in the correct way, else the moment passes and you’ve lost. On a very general level, it seems to be about reacting in the right way once the situation provides a cue.
Consider these situation-reaction pairs:
- · You are having an argument with someone. As you begin to notice the signs of yourself getting heated, you remember to calm down and talk civilly. Maybe also some deep breaths.
- · You are giving yourself a deadline or making a schedule for a task, and you write down the time you expect to finish. Quickly, though, you remember to actually check if it took you that long last time, and you adjust accordingly.
- · You feel yourself slipping towards doing something some part of you doesn’t want to do. Say you are reneging on a previous commitment. As you give in to temptation, you remember to pause and really let the two sides of yourself communicate.
- · You think about doing something, but you feel aversive / flinch-y to it. As you shy away from the mental pain, rather than just quickly thinking about something else, you also feel curious as to why you feel that way. You query your brain and try to pick apart the “ugh” feeling.
Two things seem key to the above scenarios:
One, each situation above involves taking an action that is different from our keyed-in defaults.
Two, the situation-reaction pair paradigm is pretty much CFAR’s Trigger Action Plan (TAP) model, paired with a multi-step plan.
Also, knowing about biases isn’t enough to make good decisions. Even memorizing a mantra like “Notice signs of aversion and query them!” probably isn’t going to be clear enough to be translated into something actionable. It sounds nice enough on the conceptual level, but when, in the moment, you remember such a mantra, you still need to figure out how to “notice signs of aversion and query them”.
What we want is a series of explicit steps that turn the abstract mantra into small, actionable steps. Then, we want to quickly deploy the steps at the first sign of the situation we’re looking out for, like a new cached response.
This looks like a problem that a combination of focused habit-building and a breakdown of the 5-second level can help solve.
In short, the goal looks to be to combine triggers with clear algorithms to quickly optimize in the moment. Reference class information from habit studies can also help give good estimates of how long the whole process will take to internalize (66 days on average, according to Lally et al.).
But these Trigger Action Plan-type plans don’t seem to directly cover the willpower related problems with akrasia.
Sure, TAPs can help alert you to the presence of an internal problem, like in the above example where you notice aversion. And the actual internal conversation can probably be operationalized to some extent, like how CFAR has described the process of Double Crux.
But most of the Overriding Default Habit actions seem to be ones I’d be happy to do anytime—I just need a reminder—whereas akrasia-related problems are centrally related to me trying to debug my motivational system. For that reason, I think it helps to separate the two. Also, it makes the outside-seeming TAP algorithms complementary, rather than at odds, with the inside-seeming internal debugging techniques.
Loosely speaking, then, I think it still makes quite a bit of sense to divide the things rationality helps with into two categories:
- Overriding Default Habits:
These are the situation-reaction pairs I’ve covered above. Here, you’re substituting a modified action instead of your “default action”. But the cue serves as mainly a reminder/trigger. It’s less about diagnosing internal disagreement.
- Akrasia / Willpower Problems:
Here we’re talking about problems that might require you to precommit (although precommitment might not be all you need to do), perhaps because of decision instability. The “action-intention gap” caused by akrasia, where you (sort of) want to do something but also don’t, goes in here too.
Still, it’s easy to point to lots of other things that fall in the bounds of rationality that my approach doesn’t cover: epistemology, meta-levels, VNM rationality, and many other concepts are conspicuously absent. Part of this is because I’ve been focusing on instrumental rationality, while a lot of those ideas are more in the epistemic camp.
Ideas like meta-levels do seem to have some place in informing other ideas and skills. Even as declarative knowledge, they do chain together in a way that results in useful real world heuristics. Meta-levels, for example, can help you keep track of the ultimate direction in a conversation. Then, it can help you table conversations that don’t seem immediately useful/relevant and not get sucked into the object-level discussion.
At some point, useful information about how the world works should actually help you make better decisions in the real world. For an especially pragmatic approach, it may be useful to ask yourself, each time you learn something new, “What do I see myself doing as a result of learning this information?”
There’s definitely more to mine from the related fields of learning theory, habits, and debiasing, but I think I’ll have more than enough skills to practice if I just focus on the immediately practical ones.
About a month ago, Anna posted about the Importance of Less Wrong or Another Single Conversational Locus, followed shortly by Sarah Constantin's http://lesswrong.com/lw/o62/a_return_to_discussion/
There was a week or two of heavy activity by some old-timers. Since then there's been a decent array of good posts, but nothing quite as inspiring as the first week, and I don't know whether to think "we just need to try harder" or to change tactics in some way.
- I do feel it's been valuable to be able to quickly see a lot of the community's posts in one place
- I don't think the quality of the comments is that good, which is a bit demotivating.
- on Facebook, lots of great conversations happen in a low-friction way, and when someone starts being annoying, the person whose Facebook wall it is has the authority to delete comments with abandon, which I think is helpful.
- I could see the solution being to either continue trying to incentivize better LW comments, or to just have LW be "single locus for big important ideas, but discussion to flesh them out still happen in more casual environments"
- I'm frustrated that the intellectual projects on Less Wrong are largely silo'd from the Effective Altruism community, which I think could really use them.
- The Main RSS feed has a lot of subscribers (I think I recall "about 10k"), so having things posted there seems good.
- I think it's good to NOT have people automatically post things there, since that produced a lot of weird anxiety/tension on "is my post good enough for main? I dunno!"
- But, there's also not a clear path to get something promoted to Main, or a sense of which things are important enough for Main
- I notice that I (personally) feel an ugh response to link posts and don't like being taken away from LW when I'm browsing LW. I'm not sure why.
Curious if others have thoughts.
(This is a crossposted FB post, so it might read a bit weird)
My goal this year (in particular, my main focus once I arrive in the Bay, but also my focus in NY and online in the meanwhile), is to join and champion the growing cause of people trying to fix some systemic problems in EA and Rationalsphere relating to "lack of Hufflepuff virtue".
I want Hufflepuff Virtue to feel exciting and important, because it is, and I want it to be something that flows naturally into our pursuit of epistemic integrity, intellectual creativity, and concrete action.
Some concrete examples:
- on the 5 second reflex level, notice when people need help or when things need doing, and do those things.
- have an integrated understanding that being kind to people is *part* of helping them (and you!) to learn more, and have better ideas.
(There are a bunch of ways to be kind to people that do NOT do this, i.e. politely agreeing to disagree. That's not what I'm talking about. We need to hold each other to higher standards but not talk down to people in a fashion that gets in the way of understanding. There are tradeoffs and I'm not sure of the best approach but there's a lot of room for improvement)
- be excited and willing to be the person doing the grunt work to make something happen
- foster a sense that the community encourages people to try new events, actively take personal responsibility to notice and fix community-wide problems that aren't necessarily sexy.
- when starting new projects, try to have mentorship and teamwork built into their ethos from the get-go, rather than hastily tacked on later
I want these sorts of things to come easily to mind when the future people of 2019 think about the rationality community, and have them feel like central examples of the community rather than things that we talk about wanting-more-of.