Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Viliam 23 January 2017 10:19:22AM *  0 points [-]

I don't know if it is possible, but could you explain Dirichlet's theorem on arithmetic progressions and Green–Tao theorem to someone with, uhm, good knowledge of high-school math, but not much beyond that?

In general I wonder how can anything be proved about prime numbers (other than the fact that they are infinitely many), because they seem to appear quite randomly.

EDIT: I will accept if the inferential distance is simply too large. I am just hoping that maybe it isn't.

Comment author: Viliam 23 January 2017 09:51:16AM 0 points [-]

In particular, rationality tends to give advice like “ignore your intuitions/feelings, and rely on conscious reasoning and explicit calculation”. Postrationality, on the other hand, says “actually, intuitions and feelings are really important, let’s see if we can work with them instead of against them”.

Postrationality recognizes that System 1 and System 2 (if they even exist) have different strengths and weaknesses, and what we need is an appropriate interplay between the two.

This would make me a post-rationalist, too.

Postrationalists don’t think that death, suffering, and the forces of nature are cosmic evils that need to be destroyed.

Postrationalists enjoy surrealist art and fiction.

This wouldn't.

I guess the second part is more important, because the first part is mostly a strawman.

Comment author: MrMind 23 January 2017 09:44:38AM 0 points [-]

Exactly. Even if I can debug the internal process, that doesn't stop it from happening.

Comment author: MrMind 23 January 2017 09:43:39AM 0 points [-]

The skill to try to build is understanding what your nonverbal parts actually want

Isn't that the easy part? Just look at what it's doing: if I'm eating a bag of chips instead of working out, then it means that my non-verbal part wants to eat a bag of chips. Or is there something else?

Comment author: Viliam 23 January 2017 09:41:03AM *  0 points [-]

How do you become a rationalist political being if you aren't able to practice rationalist politics in the supportive company of other rationalists?

I don't think LW qualifies as a sufficiently supportive company of rationalists, for at least two major reasons: (1) Eugine and his army of sockpuppets; (2) anyone can join, rationalist or not, and talking about politics would most likely attract the wrong kind of people, so even if LW qualified as a sufficiently supportive company of rationalists now, that could easily change overnight.

I imagine that if we could solve the problem of sockpuppets and/or create a system of "trusted users" who could moderate the debate, we would have a chance to debate politics rationally. But I suspect that a rational political debate would be quite boring for most people.

To give an example of "boring politics": when Trump was elected, half the people on the internet were posting messages like "that's great, now America will be great again", and the other half were posting messages like "that's horrible, now racists and sexists will be everywhere, and we are all doomed"... and there was a tiny group of people posting messages like "Trump's election increased the value of funds in sectors A, B, C, and decreased the value of funds in sectors X, Y, Z, so by hedging against this outcome I made an N% return". You didn't have to tell these people that rationalists are supposed to bet on their beliefs, because they already did.

Comment author: whpearson 23 January 2017 09:39:38AM 0 points [-]

I didn't know what "shared art" meant in the initial post, and I still don't.

So the art of rationality is the set of techniques we share to help each other "win" in our contexts. The thrust of my argument has been that I think rationality is a two-place word: you need a defined context to be able to talk about what "wins". Why? Because of results like the no-free-lunch theorems. If you point me at AIXI as optimal, I'll point out that its optimality only says there is no better algorithm over all problems, which is consistent with there being lots of other, equally bad algorithms.
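Roughly, the no-free-lunch theorems alluded to above (in the Wolpert–Macready form) say that for any two search algorithms $a_1, a_2$, performance averaged uniformly over all objective functions $f$ on a finite domain is identical:

$$\sum_{f} P(d_m^y \mid f, m, a_1) \;=\; \sum_{f} P(d_m^y \mid f, m, a_2),$$

where $d_m^y$ is the sequence of objective values observed after $m$ evaluations. Any algorithm's advantage on one class of problems is exactly paid for on the complement.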

If you took any organism on earth and replaced its brain with a perfectly rational circuit that used exactly the same resources, it would, I imagine, clobber other organisms of its type in 'fitness' by so incredibly much that it would dominate its carbon-brained equivalent to the point of extinction in two generations or less.

This would only be true by definition - and I don't think it is necessarily a mathematically sensible definition (all the problems in the world might have sufficient shared context).

Comment author: MrMind 23 January 2017 09:38:15AM *  0 points [-]

Infinities are an interesting case for rationality. On one hand, they are totally made up: we have no example of an infinite quantity in our experience. On the other hand, they seem to have a quality of coherence and persistence that is independent of our minds, in contrast with other kinds of fiction. They are complex and unintuitive, and this problem shines a light on the fact that even among experts there are different degrees of expertise.
A set theorist might reply that, by the Transfinite Recursion Theorem, that summation really sums to omega, not to any finite value, and that the only way to create a coherent infinite function is to specify a limit step - but someone who is not versed in mathematics will believe whatever a nice and lively professor seen on YouTube says. It is unfortunate that, seen from 'below', all experts look alike, and there are few ways to discern someone who is treading outside their area of expertise.
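A sketch of the set-theoretic claim, for the curious: in ordinal arithmetic an infinite sum is defined at the limit step as the supremum of the partial sums, so

$$1 + 2 + 3 + \cdots \;=\; \sup_{n<\omega} \frac{n(n+1)}{2} \;=\; \omega,$$

since the finite partial sums are cofinal in the natural numbers.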

Comment author: Viliam 23 January 2017 09:18:13AM 0 points [-]

The best leaders are those their people hardly know exist.
The next best is a leader who is loved and praised.
Next comes the one who is feared.
The worst one is the leader that is despised.

-- Tao Te Ching

Comment author: Viliam 23 January 2017 09:13:50AM *  0 points [-]

How about doing a public beta for a month or two, with a warning that afterwards everything posted on the new server will be deleted (including new user accounts, etc.), data from the old server will be imported, the old server will become read-only, and the new server will become the official one?

Comment author: Thomas 23 January 2017 08:42:09AM 0 points [-]
Comment author: ArisC 23 January 2017 08:39:15AM 0 points [-]

Thanks for your response!

First, re the suitability of (b) as a general criterion: if your theory rests on arbitrary principles, then you admit that it's nothing more than a subjective guide... so then what's the point of trying to argue for it? If at the end of the day it all comes down to personal preference, you might as well give up on the discussion, no?

With regards to liberty meeting that criterion, it is at least a fact on which everyone can agree that not everyone agrees on an absolute moral authority. So starting from this fact, we can derive the principle that nothing gives you the right to infringe on other people's liberty. This doesn't exactly presuppose a "fairness" principle - it's sort of like bootstrapping: it just presupposes the absence of a right to harm others. I am not saying that not being violent is right; I am saying that being violent isn't.

On your point that this theory leaves a lot of moral dilemmas uncovered - you are right. Sadly, I don't have an answer to that. Perhaps I could add a fourth criterion, to do with completeness, but I suspect that no moral theory would meet all of the criteria. But to be clear here - you are not rejecting criterion (a) as far as I can tell; you are just saying it's not sufficient, right?

As for your personal principle - I cannot say whether it meets criteria a and c because you have not provided enough details, e.g. how do you balance justice vs honesty vs liberty? If what you are saying is "it all comes down to the particular situation", then you are not describing a moral theory but personal judgement.

But I appreciate the critique - my arguing back isn't me blindly rejecting any counter-arguments!

Comment author: Fluttershy 23 January 2017 08:29:23AM 0 points [-]

Let's find out how contentious a few claims about status are.

  1. Lowering your status can be simultaneously cooperative and self-beneficial.

    Strongly Disagree Strongly Agree

  2. Conditional on status games being zero-sum in terms of status, it’s possible/common for the people participating in or affected by a status game to end up much happier or much worse off, on average, than they were before the status game.

    Strongly Disagree Strongly Agree

  3. Instinctive trust of high status people regularly obstructs epistemic cleanliness outside of the EA and rationalist communities.

    Strongly Disagree Strongly Agree

  4. Instinctive trust of high status people regularly obstructs epistemic cleanliness within the EA and rationalist communities.

    Strongly Disagree Strongly Agree


Comment author: TiffanyAching 23 January 2017 07:45:10AM 0 points [-]

Hi ArisC! Gratz on your first post. A few thoughts:

I can't agree with your b) criterion - non-arbitrary. The fundamental principle has to be arbitrary, or you end up in a turtles-all-the-way-down situation where each principle rests upon another. "The fundamental principle is to not infringe on the liberty of others." Why not? "Because everyone agrees there's no way to prove moral authority." No they don't. Billions don't. "Well they should, because it's true." Well, so what if it is? "That means you have no right to impose moral authority on anyone." What's this "no right" of which you speak - what does that mean?

This "no-one has the right" statement surely implies the existence of another principle - "it is right to be just/fair, it is wrong to be unjust/unfair". Having the right to something means having it fairly. If "don't infringe on personal liberty" is not based upon any other principle, then it is itself arbitrary. If it is based upon an ideal of "don't do unjust things, (such as assuming moral authority)" then you've got yourself another, even deeper principle. And that could cause some issues with your a) criterion, consistency, because it's possible to imagine scenarios where "injustice is wrong" and "interfering with personal liberty is wrong" are in conflict - in fact we deal with those scenarios every day in the real world. And speaking of the consistency criterion:

if a theory consists of a number of principles that contradict each other, there will be situations where the theory will suggest contradictory actions - hence failing its purpose as a tool to enable choice making.

Surely a moral system fails in its purpose as "a tool for choice-making" if its constituent principles - or principle, in the libertarian case - won't actually cover a whole range of moral-choice scenarios? To pick an example at random, imagine an honesty-based payment system for an online product. The site says "please pay whatever you think this is worth". You happen to know that the site needs $5 per customer to make the business profitable. You actually believe the value of the product to be $10. How much do you pay? Or take the old Trolley Problem, where you have a choice between allowing five kids to die by inaction and killing one through your own act. I don't see how "do not infringe on other people's liberty" is a useful tool for making either of those choices without stretching the definitions of "infringe" and "liberty" to breaking point. "Don't infringe on people's liberty" can only inform choices where someone's liberty is at stake - to re-frame all moral decisions as centering on someone's "liberty" would, again, seem to me to require torturing the definition of liberty.

Now I know this isn't answering your question about moral systems that meet your criteria but all I can say to that is that I don't accept your first two criteria at all. The first I've discussed. As for the second, I think that the basic idea of authority - the designation of certain individuals as rule-makers and rule-enforcers by group consensus - is justifiable. It's part of my moral system.

My bedrock principle is "survival of the human species". It is arbitrary - why care about the survival of humanity? - but it is also based in reality. We have basic biological urges to survive, to procreate (most of us) and to nurture our offspring so that they also survive and procreate. Most of us want the species to keep going. I do. So that's where I start. We have to live with each other as individuals to survive as a species. That's the second level, and I think that's also clearly based in fact. And from there a whole slew of tertiary principles arise based on what makes it possible for us to live and co-operate with each other. Justice, honesty, value for life, mutual tolerance and yes, personal liberty too. They are not "consistent" in the way I understand you to use the word, because they have to be balanced against each other in any given situation to achieve the goal - survival of the species. They do, as far as I can see, lead when taken to their logical extent to a society that is not dystopian - not perfect, but pretty functional. Optimal balancing is something we've been arguing about for millennia but we've done well enough so far that we are still here, talking about morality on the internet.

Comment author: jam_brand 23 January 2017 07:41:39AM 0 points [-]

FWLIW, I took "I've never heard of metatroll either, but I won't hold that against them :)" as intended to have a net-deëscalatory effect, even if it didn't seem to be entirely subtext-free. (and this combination of attributes is not something I have a problem with)

Comment author: Thomas 23 January 2017 07:14:00AM 0 points [-]

0.999... is the limit of 9/10+9/100+9/1000+...

...9990 is what?
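For reference, the first expression is a geometric series with ratio 1/10, which is why its limit is exactly 1:

$$\sum_{n=1}^{\infty} \frac{9}{10^n} \;=\; \frac{9}{10}\cdot\frac{1}{1-\tfrac{1}{10}} \;=\; 1.$$

The second string, by contrast, names no real number: a decimal expansion has one digit per natural-number position, so there is no position "after" infinitely many 9s for the final 0 to occupy.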

Comment author: lifelonglearner 23 January 2017 05:47:01AM 0 points [-]

Comment thread

Comment author: lifelonglearner 23 January 2017 05:45:43AM *  0 points [-]

How frequently do you talk to yourself? (As opposed to Gunnar's talk to others question)

How good do you think you are at introspection? (i.e. debugging internal "ugh"s, figuring out where your feelings are coming from, etc.) (1-10 scale)

  • 1 = "I'm never really sure what's happening inside me. I just do things."

  • 5 = "I can tell where some feelings come from." or "I can sorta understand how I operate."

  • 10 = "I can solve internal problems really effectively" or "I have really good explicit mental models of how my own mind works."


Comment author: lifelonglearner 23 January 2017 05:37:46AM 0 points [-]

Hm, I think that might actually just be good for seeing how prevalent this is, esp. around here.

I suspect that a high frequency of talking to yourself or high quality of internal conversations is strongly associated with good introspection or focusing (in the Gendlin sense).

Comment author: TiffanyAching 23 January 2017 04:43:30AM 0 points [-]

Of all the different explanations and interpretations people have been giving in this thread this is the most satisfying to my mathematically illiterate brain. It's troublesome for me to grasp how 0.999... isn't always just a bit smaller than 1 because my brain wants to think that even an infinitely tiny difference is still a difference. But when you put it like that - there's nowhere between the two where you can draw a line between them - it seems to click in. 0.999... hugs 1 so tight that you can't meaningfully separate them.

Comment author: Kindly 23 January 2017 04:31:43AM 1 point [-]

I think that in the interests of being fair to the creators of the video, you should link to http://www.nottingham.ac.uk/~ppzap4/response.html, the explanation written by (at least one of) the creators of the video, which addresses some of the complaints.

In particular, let me quote the final paragraph:

There is an enduring debate about how far we should deviate from the rigorous academic approach in order to engage the wider public. From what I can tell, our video has engaged huge numbers of people, with and without mathematical backgrounds, and got them debating divergent sums in internet forums and in the office. That cannot be a bad thing and I'm sure the simplicity of the presentation contributed enormously to that. In fact, if I may return to the original question, "what do we get if we sum the natural numbers?", I think another answer might be the following: we get people talking about Mathematics.

In light of this paragraph, I think a cynical answer to the litmus test is this. Faced with such a ridiculous claim, it's wrong to engage with it only on the subject level, where your options are "Yes, I will accept this mathematical fact, even though I don't understand it" or "No, I will not accept this fact, because it flies in the face of everything I know." Instead, you have to at least consider the goals of the person making the claim. Why are they saying something that seems obviously false? What reaction are they hoping to get?
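For context, the claim in question is the video's assertion that the sum of the natural numbers is −1/12. The partial sums diverge; the value −1/12 comes from analytic continuation of the Riemann zeta function, not from ordinary summation:

$$\sum_{n=1}^{N} n \;=\; \frac{N(N+1)}{2} \;\longrightarrow\; \infty \quad (N \to \infty), \qquad \zeta(-1) = -\frac{1}{12},$$

where $\zeta(s) = \sum_{n \ge 1} n^{-s}$ converges only for $\mathrm{Re}(s) > 1$ and is extended to $s = -1$ by analytic continuation.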

Comment author: TiffanyAching 23 January 2017 03:58:27AM 0 points [-]

I would think that for the purposes of the poll that doesn't count, because it's more a "guided thinking" thing - you're helping yourself to organize your thoughts by framing your problem as an imaginary dialogue. I do it too, with mixed results (I sometimes just end up scolding myself which I don't think is particularly constructive). But I would think it's qualitatively different to an actual dialogue with another mind which has at least the potential to introduce solutions or perspectives that you would not have come up with on your own. Maybe you should create a similar poll to see how many people talk to themselves and whether it helps!

Comment author: shev 23 January 2017 01:20:46AM 1 point [-]

You had written

"I really want a group of people that I can trust to be truth seeking and also truth saying. LW had an emphasis for that and rationalists seem to be slipping away from it with "rationality is about winning"."

And I'm saying that LW is about rationality, and rationality is how you optimally do things, and truth-seeking is a side effect. And the truth-seeking stuff in the rationality community that you like is because "a community about rationality" is naturally compelled to participate in truth-seeking, because it's useful and interesting to rationalists. But truth-seeking isn't inherently what rationality is.

Rationality is conceptually related to fitness. That is, "making optimal plays" should be equivalent to maximizing fitness within one's physical parameters. More rational creatures are going to be more fit than less rational ones, assuming no other tradeoffs.

It's irrelevant that creatures survive without being rational. Evolution is a statistical phenomenon and has nothing to do with it. If they were more rational, they'd survive better. Hence rationality is related to fitness with all physical variables kept the same. If it cost them resources to be more rational, maybe they wouldn't survive better, but that wouldn't be keeping the physical variables the same so it's not interesting to point that out.

If you took any organism on earth and replaced its brain with a perfectly rational circuit that used exactly the same resources, it would, I imagine, clobber other organisms of its type in 'fitness' by so incredibly much that it would dominate its carbon-brained equivalent to the point of extinction in two generations or less.

I didn't know what "shared art" meant in the initial post, and I still don't.

Comment author: username2 23 January 2017 01:16:37AM 0 points [-]

I'm having trouble parsing the intended meaning. Can you clarify?

Comment author: gjm 23 January 2017 12:41:16AM *  0 points [-]

I think there's something wrong with your analysis of the longer/shorter survey data.

[EDITED to add:] ... and, having written this and gone back to read the comments on your post, I see that someone there has already said almost exactly the same as I'm saying here. Oh well.

You start out by saying that you should write longer posts if 25% more readers prefer long than prefer short (and similarly for writing shorter posts).

Then you consider three hypotheses: that (as near as possible to) exactly 25% more prefer long than prefer short, that (as near as possible to) exactly 25% more prefer short, and that the numbers preferring long and preferring short are equal.

And you establish that your posterior probability for the first of those is much bigger than for either of the others, and say

Our simple analysis led us to an actionable conclusion: there's a 97% chance that the preference gap in favor of longer posts is closer to 25% than to 0%, so I shouldn't hesitate to write longer posts.

Everything before the last step is fine (though, as you do remark explicitly, it would be better to consider a continuous range of hypotheses about the preference gap). But surely the last step is just wrong in at least two ways.

  • You can't get from "preference gap of exactly 25% is much more likely than preference gap of exactly 0%" to "preference gap of at least 12.5% is much more likely than preference gap of at most 12.5%".
  • The original question wasn't whether the preference gap is at least 12.5%, it was whether it's at least 25%.

With any reasonable prior, I think the data you have make it extremely unlikely that the preference gap is at least 25%.

[EDITED to add:] Oh, one other thing I meant to say but forgot (which, unlike the above, hasn't already been said in comments on your blog). The assumption being made here is, roughly, that people responding to the survey are a uniform random sample from all your readers. But I bet they aren't. In particular, I bet more "engaged" readers are (1) more likely to respond to the survey and (2) more likely to prefer longer meatier posts. So I bet the real preference gap among your whole readership is smaller than the one found in the survey. Of course you may actually prefer to optimize for the experience of your more engaged readers, but again that isn't what you said you wanted to do :-).
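To illustrate the continuous-range-of-hypotheses approach suggested above, here is a minimal sketch with made-up survey counts (36 of 60 respondents preferring long - purely hypothetical, not the blog's actual data). With a uniform prior on the probability p that a respondent prefers long, the preference gap is 2p − 1, so "gap of at least 25%" means p ≥ 0.625; `beta_posterior_tail` is a name invented for this sketch.

```python
import math

def beta_posterior_tail(k: int, n: int, p0: float, steps: int = 200_000) -> float:
    """P(p >= p0) under the Beta(k+1, n-k+1) posterior that a uniform
    prior yields after observing k successes in n trials, computed by
    midpoint-rule numeric integration of the posterior density."""
    a, b = k + 1, n - k + 1
    # log of the Beta normalizing constant Gamma(a+b) / (Gamma(a) * Gamma(b))
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    width = (1.0 - p0) / steps
    total = 0.0
    for i in range(steps):
        p = p0 + (i + 0.5) * width
        total += math.exp(log_norm + (a - 1) * math.log(p) + (b - 1) * math.log(1.0 - p))
    return total * width

# Hypothetical data: 36 of 60 respondents with a preference prefer long posts.
prob = beta_posterior_tail(36, 60, 0.625)
print(f"P(gap >= 25%) ~= {prob:.2f}")
```

With these made-up counts the posterior probability of a gap of at least 25% comes out well under one half, even though the point estimate of the gap (2·36/60 − 1 = 20%) is closer to 25% than to 0% - which is exactly the distinction the comment is drawing.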

Comment author: lifelonglearner 23 January 2017 12:20:31AM 0 points [-]

What about talking to yourself?

Does that count here too?

I've found that doing silly things like opening a text document and asking myself questions and then replying to them has been therapeutic for small things on multiple occasions.

Comment author: Gunnar_Zarncke 22 January 2017 11:43:35PM 0 points [-]

Does talking help?

It is said that talking helps.

Whether it does is an often posed question e.g. here or here - except when it doesn't.

When therapists do it, it's called talk therapy.

Personally, I have noticed that the chance of a problem going away or otherwise being solved seems to increase after talking about it. This perception could be due to any number of biases, of course. Anyway, I'd like to ask for your opinion on when and whether talking helps.

What do you think?

Talking does help most people

Not at all Almost always

Talking does help me

Not at all Almost always

Talking helps more with small everyday problems than with big/traumatic/existential problems

Agree Exactly the opposite

How often do you talk about your problems?

Vote help:

  • On the last question: If you think that it doesn't depend on the size you should choose the middle option.
  • For the graded questions: If you just want to see the results please choose the middle option.

Comment author: whpearson 22 January 2017 11:37:21PM 0 points [-]

Well. We should probably distinguish between what rationality is about and what LW/rationalist communities are about.

Rationalists aren't about rationality? Back in 2007 I don't think there was a split. Maybe we need to rename rationalists if "rationality is winning" is entrenched.

LWperson: I'm a rationalist, I really care about AIrisk.

PersonWhohasReadSomeRationalityStuff: So you will lie to get whatever you want, why should I think AIrisk is as important as you say and give you money?

LWPerson: Sigh...

Rationality-the-mental-art is, I think, about "making optimal plays" at whatever you're doing, which leads to winning (I prefer the former because it avoids the problem where you might only win probabilistically, which may mean you never actually win).

I consider every mental or computational action a "play" because it uses energy and can have a material impact on someone's goals. So being more precise in your thinking or modelling is also a 'play', even before you make a play in the actual game.

Evolution doesn't really apply. If some species could choose the way they want to evolve rationally over millions of years I expect they would clobber the competition at any goal they seek to achieve. Evolution is a big probabilistic lottery with no individuals playing it.

I think you missed my point about evolution.

Your version of rationality sounds a lot like fitness in evolution. We don't know what it is, but it is whatever it is that survives (wins). And if we look at evolution, where the goal is survival, lots of creatures manage to survive while not having great modelling capability. This is because modelling is hard and expensive.

Fitness is also not a shared art. Ants telling birds how to be "fit" would not be a productive conversation.

I've run out of time again. I shall try and respond to the rest of your post later.

Comment author: Qiaochu_Yuan 22 January 2017 11:30:26PM 1 point [-]

Sure. What is different about the situation with 0.999...? How do you know that that is a sensible name for a real number?

Comment author: Gunnar_Zarncke 22 January 2017 11:26:33PM *  0 points [-]

Comments go here

Comment author: Thomas 22 January 2017 11:10:00PM 0 points [-]

I am saying you cannot write ...9990 - the decimal point, then an infinite number of 9s and then the last zero!

Okay, perhaps you can in some other axiomatic system. But not for the ordinary real numbers.

Comment author: NatashaRostova 22 January 2017 11:03:44PM 1 point [-]

This is pretty cool. It reminds me of an article I read on brain surgery recently (https://www.nytimes.com/2016/01/03/magazine/karl-ove-knausgaard-on-the-terrible-beauty-of-brain-surgery.html). Where the surgeon keeps the patient awake, and zaps different parts of the brain to see what they map to. They don't even try to pretend they understand the system, but try to map simple correlations.

Comment author: Thomas 22 January 2017 10:59:15PM *  0 points [-]

you aren't taking seriously the hypothetical world in which 1 - 0.999... isn't zero

In this (math) world it is zero only because for every nonzero positive epsilon, you can pick a FINITE number of 9s, such that 1-0.999999...99999 (a FINITE number of 9s) is already SMALLER than that epsilon.

For EVERY real number greater than zero, you have a FINITE number of 9s, such that this difference is smaller.

Therefore the difference cannot be a number greater than 0.
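The finite-epsilon argument above can be sketched in code (a toy illustration: `nines_needed` is a made-up name, and exact fractions stand in for real numbers).

```python
from fractions import Fraction

def nines_needed(epsilon: Fraction) -> int:
    """Return a finite n such that 1 - 0.99...9 (n nines), which is
    exactly 10**-n, is smaller than the given positive epsilon."""
    n = 0
    while Fraction(1, 10 ** n) >= epsilon:
        n += 1
    return n

# However small a positive epsilon we pick, a FINITE number of 9s already
# gets closer to 1 than epsilon, so 1 - 0.999... can't exceed any positive real.
for eps in (Fraction(1, 100), Fraction(1, 10 ** 10), Fraction(1, 10 ** 50)):
    n = nines_needed(eps)
    assert Fraction(1, 10 ** n) < eps
    print(n)  # prints 3, then 11, then 51
```

The loop always terminates because 10**-n shrinks below any fixed positive rational; that termination is the whole content of the argument.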

Comment author: Thomas 22 January 2017 10:52:44PM 1 point [-]

An infinite number of them. Then after the last zero,

There is no "after the last" zero.

Comment author: Elo 22 January 2017 09:04:03PM 0 points [-]

Agree with most of your edits.

Comment author: jamesf 22 January 2017 08:43:04PM 0 points [-]

Some of the weird suns are into postrationality, as I would define it, but most of them aren't. (That, or, they keep their affiliation with postrationality secret, which is plausible enough given their commitment to opsec.)

I would add The Timeless Way of Building to the list of primary texts; Christopher Alexander has been a huge influence for many of us.

Comment author: CellBioGuy 22 January 2017 08:31:42PM *  1 point [-]

My internet presence and my IRL presence among my friends has fallen to about zero as I am doing a final push to graduate with my PhD in cell biology and genomics. On a job interview right now for a position studying something I am passionate about for real. Thesis being written (and Latex being learned) for, hopefully, a defense at the end of March.

It's remarkable how much data I have when I actually dig everything up from the last 3 years and lay it out side by side.

Comment author: Qiaochu_Yuan 22 January 2017 08:30:50PM 1 point [-]

I'm sympathetic to this concern (it's why I don't like the QM sequence and think thinking about many-worlds is mostly a waste of time), but I also think math has the potential to be a useful toy environment in which to practice good epistemic habits (as suggested by shev's recent litmus test posts), especially around confusing paradoxes and the like. Many of the complications of reasoning about the real world, like disagreement about complicated empirical facts, are gone, but a few, like the difficulty of telling whether you've made an unjustified assumption, remain.

Comment author: The_Jaded_One 22 January 2017 08:23:15PM 2 points [-]

I don't think lots of math is a good direction to take the site. And I say this as a person with a mathematics degree.

I think mathematics is a bit too much of a fun distraction for us nerds from the hard problem of "refining the art of human rationality".

Comment author: The_Jaded_One 22 January 2017 08:13:17PM *  0 points [-]

but their position relies on physical facts about the world, along with a narrow definition of the correction of value event. To combat that, we'd need to define the operator properly

I feel like this is a very generic problem with safety.

If you can't specify to an algorithm what part of the real world you're talking about (at least to a reasonably good approximation) then it is very hard to make progress.

Perhaps the simplest way to specify "who the operator is" is to define the abstract notion of creation-of-subagents, and tell the AI that the "operator" is the agent that created it.

The abstract notion of creation-of-subagents seems quite robust once you have a system that can represent and reason about that notion; it certainly seems like if you created an agent which, from the get-go understood and cared about "the agent that created it", you would rule out entire classes of paperclipper-like AIs.

Comment author: arundelo 22 January 2017 07:40:21PM 1 point [-]

If I may, let me agree with you in dialogue form:

Alice: 1 = 0.999...
Bob: No, they're different.
Alice: Okay, if they're different then why do you get zero if you subtract one from the other?
Bob: You don't, you get 0.000...0001.
Alice: How many zeros are there?
Bob: An infinite number of them. Then after the last zero, there's a one.

Alice is right (as far as real numbers go) but at this point in the discussion she has not yet proved her case; she needs to argue to Bob that he shouldn't use the concept "the last thing in an infinite sequence" (or that if he does use it he needs to define it more rigorously).
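A sketch of the argument Alice still owes Bob: a real decimal expansion has one digit for each natural-number position, so "after the last zero" names no position at all. And since the difference is squeezed below every power of ten,

$$0 \;\le\; 1 - 0.999\ldots \;\le\; 10^{-n} \ \text{ for all } n \quad\Longrightarrow\quad 1 - 0.999\ldots = 0,$$

by the Archimedean property of the reals: no positive real number is smaller than every $10^{-n}$.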

Comment author: lifelonglearner 22 January 2017 07:06:38PM *  1 point [-]

Melting Asphalt has this very intriguing analysis of personhood.

Comment author: Qiaochu_Yuan 22 January 2017 06:46:10PM 1 point [-]

Again, that's assuming the conclusion; what if 1 - 0.999... weren't zero, and you picked that as epsilon? You're skipping steps. It's worth writing down exactly what you think is happening more carefully.

(To be clear, I'm not claiming that you've asserted any false statements, but I think there's an important sense in which you aren't taking seriously the hypothetical world in which 1 - 0.999... isn't zero, and what that world might look like. There's something to learn from doing this, I think.)

Comment author: Qiaochu_Yuan 22 January 2017 06:38:35PM 1 point [-]

I think a reasonable position is "I personally do not know how to make sense of this notation," but are you claiming that "nobody knows how to make sense of this notation"? Would you be willing to make a bet to that effect, and at what odds, for how much money?

Comment author: ChristianKl 22 January 2017 06:32:24PM 0 points [-]

I am not saying that it is a wrong or worthless idea, just that comparing "having this 'one weird trick' and applying it to everything" with the whole body of knowledge and attitudes is a type error.

It seems like you are making that error. I'm not seeing anybody else making it.

There's no reason to assume that the word postrational is only about Kegan's ideas. The most in-depth post that tried to define the term (https://yearlycider.wordpress.com/2014/09/19/postrationality-table-of-contents/) didn't even speak of Kegan directly.

Calling the stage 5 a tool or "weird trick" also misses the point. It's not an idea in that class.

Comment author: steven0461 22 January 2017 06:24:44PM 2 points [-]

Random opinions on hot-button political issues are off-topic, valueless, and harmful; please take them elsewhere.

Comment author: steven0461 22 January 2017 06:15:13PM 1 point [-]

"How do you get a clean sewer system if you insist on separating it from the rest of the city?"

Comment author: steven0461 22 January 2017 06:13:09PM 1 point [-]

I don't think LW is, in fact, capable of talking about politics rationally; if it did, it wouldn't have much influence; and trying will harm its core interests through divisiveness, distraction, drawing bad users, and further reputational damage.

Comment author: steven0461 22 January 2017 06:04:32PM 1 point [-]

Many, arguably most, of the consequences of downvotes don't show up in the immediate term. Habits and expectations take time to change, posters choose whether or not to leave altogether, and so on.

Comment author: steven0461 22 January 2017 06:03:00PM 3 points [-]

The difference between having 50% bad content and 30% bad content isn't just the 20% of bad content; it's also the contributions from all those who would keep visiting if they anticipated a 30% chance of seeing bad content but would not keep visiting if they anticipated a 50% chance of seeing bad content.

Comment author: 9eB1 22 January 2017 05:50:34PM 2 points [-]

The Meaningness book's section on Meaningness and Time is all about culture viewed through Chapman's lens. Ribbonfarm has tons of articles about culture, most of which I haven't read. I haven't been following post-rationality for very long. Even on the front page now there is this, which is interesting and typical of the thought.

Post-rationalists write about rituals quite a bit I think (e.g. here). But they write about it from an outsider's perspective, emphasizing the value of "local" or "small-set" ritual to everyone as part of the human experience (whether they be traditional or new rituals). When Rationalists write about ritual my impression is that they are writing about ritual for Rationalists as part of the project of establishing or growing a Rationalist community to raise the sanity waterline. Post-rationalists don't identify as a group to the extent that they want to have "post-rationalist rituals." David Chapman is a very active Buddhist, for example, so he participates in rituals (this link from his Buddhism blog) related to that community, and presumably the authors at ribbonfarm observe rituals that are relevant within their local communities.

Honestly, I don't think there is much in the way of fundamental philosophical differences. I think it's more like Rationalists and post-Rationalists are drawn from the same pool of people, but some are more interested in model trains and some are more interested in D&D. It would be hard for me to make this argument rigorous, though; it's just my impression.

Comment author: lifelonglearner 22 January 2017 04:57:23PM 0 points [-]

Thanks for putting this together!

I'm unsure how much info we want to put on the LW home page (I'm leaning towards less stuff is better). Are there good repositories / intro pages where we could put the rest of the info?

Also, made a few edits / comments for readability and flow on the doc.

Comment author: JacobLiechty 22 January 2017 11:54:06AM *  0 points [-]

It's been argued (I think even by Hanson) that the best way to be rational is to not be a rationalist. There's some truth to that cheek that I think is well-captured by metarationalism.

Metarationality, to the extent that I've been affected by it personally, is what permits me to live and think in a way that ends up being outcome-maximizing, capable, and powerful, without doing much of what is recognizably rationalist. In that sense, rationalism is subsumed by metarationality. But in another sense, knowing this doesn't make me not a rationalist, which means metarationality is also subsumed by rationality.

One might say this means that the two terms are equivalent, except that metarationality also permits me to float back and forth between these two seemingly contradictory senses in a structured way. This may be a Bad Thing, but I can only say for myself, as somebody who has at some point fully understood all the core tenets of rationalism, that this more recent attitude has worked out well in my actual, real life.

Comment author: Kaj_Sotala 22 January 2017 11:44:05AM 2 points [-]

Do you feel the "link post ugh"?

Comment author: JacobLiechty 22 January 2017 11:36:46AM *  0 points [-]

I agree intuitively on all the points about the aesthetics of postrational writing. Do you have some good examples of post-rationalists writing about culture? Also I'm curious about their focus on rituals. I haven't seen much actual writing about it and I certainly haven't seen any of it in practice (assuming it's more than theoretical).

I've been toying with the idea that there are discoverable fundamental philosophical differences that end up explaining the differences in style from the inside. From the outside, obviously there are discoverable causes. But to the extent that all personality types and institutions admit different ontologies and philosophies, there may be some philosophical, mathematical, or AI-theoretic way that metarationalists would argue for metarationalism fundamentally.

Comment author: JacobLiechty 22 January 2017 11:18:04AM 0 points [-]

There's an important, if hard to explain (to the Less Wrong audience), reason why I included the Weird Suns above. Twitter #postrationalism (or whatever term you want) is thrashing about in the haphazard but structured manner I think is required for any metarationalist stance. Chapman is about as safe as this thrashing comes, and at least has the decency to write it down in a way that somebody can understand. He at least explicitly tells you he's trying to break your mind before doing it. To me this is called "competent writing."

If the weird suns don't want to be included in metarationality, they are totally free to define (or not define) themselves otherwise. In the meantime, if the signals and thoughts running through their physical brains are the same kind as other humans (because the suns are, in fact, humans with Twitter accounts) then more structured metarationality is probably going to accidentally have similar thoughts and for similar reasons.

Also, I'd say that the kind of white-knighting or contrarianism in your original comment that led to all this hullabaloo is probably best suited for Twitter. Less Wrong can stay Less Wrong. Have at me.

Comment author: metatroll 22 January 2017 11:02:49AM 1 point [-]

I was white-knighting for the weird suns. See, I haven't read Chapman. I just assumed you were here to steal their work by annexing it to your own philosophical brand of "meta-rationality". I didn't know that was one of his buzzwords.

Of course you have a right to be a Chapmanite; even that version of postrational subculture is surely better than the subrational postculture that surrounds it. But do not imagine for a moment that his is the only way to go meta.

Comment author: Thomas 22 January 2017 10:52:20AM 0 points [-]

...9990

This makes no sense, really.

Comment author: shev 22 January 2017 10:10:23AM *  2 points [-]

Interleaving isn't really the right way of getting consistent results for summations. Formal methods like Cesaro summation are the better way of doing things, and give the result 1/2 for that series. There's a pretty good overview in this wiki article about summing 1 - 2 + 3 - 4 + ....
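For the curious, Cesaro summation is easy to play with numerically. A minimal sketch (my own, not from the linked article): instead of asking whether the partial sums converge, average them and ask whether the averages converge.

```python
def cesaro_mean(term, n):
    """Average of the first n partial sums of the series sum(term(k))."""
    partial, total = 0.0, 0.0
    for k in range(n):
        partial += term(k)
        total += partial
    return total / n

# Grandi's series 1 - 1 + 1 - 1 + ...: the partial sums oscillate 1, 0, 1, 0, ...
# and never settle, but their running average converges to the Cesaro sum 1/2.
grandi = lambda k: (-1) ** k
for n in (10, 100, 1000):
    print(n, cesaro_mean(grandi, n))
```

Unlike ad hoc interleaving, this procedure is deterministic: it depends only on the order of the terms as given, so it cannot be steered to different answers by regrouping.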

Comment author: D_Alex 22 January 2017 09:57:32AM *  2 points [-]

In the "proof" presented, the series 1-1+1... is "shown" to equal 1/2 by a particular choice of interleaving of the values in the series. But with other methods of interleaving, the sum can be made to "equal" 0, 1 1/3 or indeed AFAICT any rational number between 0 and 1.

So... why is the particular interleaving that gives 1/2 as the answer "correct"?

Comment author: Thomas 22 January 2017 09:01:11AM 1 point [-]

No, it does not!

Whatever epsilon you might choose, you can easily take enough 9s (nines) after the "0." to make the difference smaller than this epsilon of yours.
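This point can be checked mechanically. A small sketch (function name my own; exact arithmetic via `Fraction` to avoid floating-point edge cases at the boundary):

```python
from fractions import Fraction

def nines_needed(epsilon):
    """Smallest n such that 1 - 0.99...9 (n nines) = 10**-n is below epsilon."""
    eps = Fraction(epsilon)
    n = 1
    while Fraction(1, 10**n) >= eps:
        n += 1
    return n

# However small an epsilon you pick, finitely many nines already beat it.
for eps in (Fraction(1, 2), Fraction(1, 1000), Fraction(1, 10**12)):
    print(eps, "->", nines_needed(eps), "nines")
```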

Comment author: Qiaochu_Yuan 22 January 2017 08:54:29AM *  0 points [-]

This argument more or less assumes its conclusion; after all, if it weren't the case that 1 - 0.999... were zero, then it would be some positive number x, so you could pick epsilon = x.

Comment author: drethelin 22 January 2017 08:17:44AM 4 points [-]

This is why we need downvotes.

Comment author: waveman 22 January 2017 07:58:24AM *  1 point [-]

Yes. Still, this is the concept of limits, and it is a significant step for most people. I think the most common first reaction is "Huh?".

But people will make the effort if you explain that this is a solution to the mysteriousness of "infinitesimals".

Comment author: Osthanes 22 January 2017 07:50:02AM 0 points [-]

So I tend to think that 'agency' is too broad a category to aim at, but it seems to me that it's still useful to pursue becoming more powerful, however that presents itself to you. I mean, if not in terms of power, agency, capability, what terms do you use to benchmark and think about personal growth? Are things like Stage 5 an end unto themselves?

I've had something of the same experience when it comes to feeling like I don't think like others, but I wonder how many people think they do think like the standard rationalist. Does post-rationality change how you live, in terms of actual behavior?

Comment author: 9eB1 22 January 2017 07:29:13AM *  4 points [-]

My observation is that post-rationalists are much more interested in culture and community type stuff than the Rationalist community. This is not to say that the Rationalist community doesn't value culture and community; in fact they get discussed quite frequently (e.g. the solstice has been established explicitly to create a sense of community and "the divine"). The difference is that while Rationalists are most interested in biases, epistemology, and decision theory, post-rationalists are most interested in culture and community and related things. Mainstream Rationalists are usually, or at least sometimes, loosely tied to Rationalism as a culture (otherwise the solstice wouldn't exist), but mostly they define their interests as whatever wins and the intellectual search for right action. Post-rationalists, on the other hand, view the world through a lens where culture and community are highly important, which is why they think that Rationalism even represents a thing you can be "post" to, while many Rationalists don't see it that way.

I don't think that Rationalists are wrong when they write about culture; they usually have well-argued points that get at true things. The main difference is that post-rationalists have a sort of richness to their descriptions and understanding that is lacking in Rationalist accounts. When Rationalists write about culture, it has an ineffable dryness that doesn't ring true to my experience, while post-rationalist writing doesn't. The main exception to this is Scott Alexander, but in most other cases I think the rule holds.

Ultimately, I don't think there is much difference between the quality of insights offered by Rationalists and post-rationalists, and I don't think one is more right than the other. When reading the debates between Chapman and various Rationalist writers, the differences seem fairly minute. But there is a big difference in the sorts of things they write about. For myself, I find both views interesting and so far have not noticed any significant actual conflict in models.

Edit: Another related difference is that post-rationality authors are more willing to go out on a limb with ideas. Most of their ideas, dealing in softer areas, are necessarily less certain. It's not even clear that certainty can be established with some of their ideas, or whether they are just helpful models for thinking about the world. In the Rationalsphere people prefer arguments that are clearly backed up at every stage, ideally with peer reviewed evidence. This severely limits the kind of arguments you can make, since there are many things that we don't have research on and will plausibly never have research on.

Comment author: Thomas 22 January 2017 07:12:48AM 1 point [-]

For every epsilon greater than zero, the difference 1 - 0.999... is smaller still: smaller than any positive number.

Then, since it is not negative, it must be zero. This difference is zero.

This is the most correct way to put it, I believe.
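Written out as a limit (the standard way to make this precise):

```latex
0.\underbrace{99\ldots9}_{n\text{ nines}} \;=\; \sum_{k=1}^{n}\frac{9}{10^{k}} \;=\; 1-10^{-n},
\qquad\text{hence}\qquad
1-0.999\ldots \;=\; \lim_{n\to\infty}\bigl(1-(1-10^{-n})\bigr) \;=\; \lim_{n\to\infty}10^{-n} \;=\; 0.
```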

Comment author: JacobLiechty 22 January 2017 07:06:12AM 0 points [-]

if I were in the position of metatroll, I would take it as a perceived smugness

Ah, you're right! That response seems pretty likely, especially given the conversational norms I'm less familiar with around here. De-escalation is a difficult but useful art.

Comment author: JacobLiechty 22 January 2017 07:01:52AM *  1 point [-]

I'm considering buckling down and coordinating a host of existing abstract criticisms/expansions of Rationalism into a series of posts explaining Kegan and Metarationality, roughly as described in Meaningness but in a more Less Wrong style that argues more directly from Sequence-level "first principles."

I'm a little wary of the worthwhileness of this project, and I suspect many on Less Wrong are ambivalent about Kegan and Chapman, displaying a kind of vague annoyance that Rationalist principles are being challenged rather than any sort of excitement that those principles could be built upon in any manner distinguishable from traditional methods. The main sentiment seems to be that "Anything that successfully builds on rationality is automatically now a part of rationality."

I understand the ambivalence, I think. There should be a very high bar for "things that are successful critiques" of rationality. I'd love to inquire how much interest there would be in a high quality version of this project that is extremely self-aware of any criticisms, in order to assess whether to put in the due diligence to ensure it is of that quality. Chapman's entire blog is one thing, but a treatment that brings systems-level insights to his abstract statements could be quite revealing of what is actually meant by them.

In any case, I've started by compiling many of the source materials that would be useful to such a project in a Discussion Post.

Comment author: Elo 22 January 2017 06:58:07AM 1 point [-]

the tone of my response was meant to defuse whatever tension was the cause of the accusatory tone in the first place.

Absolutely, I see that in the sense of the idea of "leaving things unsaid" (from that specific culture). If I were in the position of metatroll, I would take it as a perceived smugness (leading downhill into more smugness in response), not in the lighthearted "don't talk about the elephant in the room" kind of way that you intended it.

Metatroll started it, you played with it instead of either letting it go or responding to it directly. I contributed by ignoring it. Do continue to hang around and share your ideas with us.

Comment author: NatashaRostova 22 January 2017 06:50:22AM 1 point [-]

[Note, after rereading your post my comment is tangential]

I have always been sympathetic to the argument, from people first presented with this, that the two are different. Understanding how math deals with infinity basically requires already knowing the mathematical structure supporting it. I'm not particularly gifted at math, but the first 4 weeks of real analysis really changed the way I think, because it was basically a condensed rapid upload of centuries of collaborative work from some of the smartest men to ever exist right into my brain.

Otherwise, at least in my experience, we operate in a discrete world that moves through time. So what I predict is happening is that when you ask that question of people, their best approximation is a discrete world ticking through time.

Is 0.999...=1? Well, each tick of time another set of [0.0...9]'s is added; when the question is finally answered, time stops. You're then left with some finite number [0.0..01]. In their mind it's a discrete algorithm running through time.

The reality that it's a limit that operates absent of time, instantaneously, is hard to grasp, because it took brilliant men centuries to figure out this profoundly unintuitive result. We understand it because we learned it.

Comment author: JacobLiechty 22 January 2017 06:27:48AM *  0 points [-]

I believe you mean to say that you don't like the tone of the original post in that it feels "accusatory".

Indeed! I'll consider making my responses more direct, dispassionate statements of fact, as I believe this is the traditional Less Wrong style that encourages root-level understanding of conversational protocols. In certain etiquettes, the tone of my response was meant to defuse whatever tension was the cause of the accusatory tone in the first place. It implicitly acknowledges that there was tension but doesn't directly call it out, preventing the costly need to pick apart what happened while giving metatroll a chance to immediately back off and coordinate in the future.

Frankly, coming back to Less Wrong after being away for a while leaves me a little wary of how the quality of discourse may have changed over time, which is why I appreciate attempts to maintain a welcoming tone.

Comment author: CronoDAS 22 January 2017 06:19:14AM 2 points [-]

When I encountered this result in school for the first time, in the context of learning the algorithm for converting a repeating decimal into a fraction, I eventually reasoned "If 1 and 0.999... are different numbers, there ought to be a number between them, but there isn't. So it must really be true that they're the same."
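The conversion algorithm mentioned above is also a neat way to see the result directly. A sketch (function name and digit-string interface my own) using exact rational arithmetic: multiplying by 10^k, where k is the length of the repeating block, and subtracting cancels the repeating tail.

```python
from fractions import Fraction

def repeating_to_fraction(non_repeating, repeating):
    """Convert 0.<non_repeating><repeating repeating forever> to a fraction.
    If x has a repeating block of length k, then 10**k * x - x has no
    repeating tail, so x = block / (10**k - 1), shifted past the fixed part."""
    m, k = len(non_repeating), len(repeating)
    head = int(non_repeating or "0")
    rep = int(repeating)
    return Fraction(head, 10**m) + Fraction(rep, (10**k - 1) * 10**m)

print(repeating_to_fraction("", "9"))        # 0.999...       -> 1
print(repeating_to_fraction("1", "6"))       # 0.1666...      -> 1/6
print(repeating_to_fraction("", "142857"))   # 0.142857142... -> 1/7
```

Running it on 0.999... gives 9/(10 - 1) = 1 exactly, which is the same observation as "there is no number between them" reached by a different route.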

Comment author: Elo 22 January 2017 06:18:32AM 1 point [-]

I've never heard of metatroll either, but I won't hold that against them :)

I believe you mean to say that you don't like the tone of the original post in that it feels "accusatory". I realise I failed to make it clear that I picked up on that when I made my response. I agree it comes across as accusatory, especially with three levels leading to "I have never heard of you". I am glad you said:

I won't hold that against them :)

Keeping a welcoming community is very important to us.

Comment author: shev 22 January 2017 06:16:38AM *  0 points [-]

I know about Cesaro and Abel summation and vaguely understand analytic continuation and regularization techniques for deriving results from divergent series. And... I strongly disagree with that last sentence. As explained in this post, I think statements like "1+2+3+...=-1/12" are criminally deceptive.

Valid statements that eliminate the confusion are things like "1+2+3...=-1/12+O(infinity)", or "analytic_continuation(1+2+3+...)=-1/12", or "1#2#3=-1/12", where # is a different operation that implies "addition with analytic continuation", or "1+2+3 # -1/12", where # is like = but implies analytic continuation. Or, for other series, "1-2+3-4... #1/4" where # means "equality with Abel summation".

The massive abuse of notation in "1+2+3..=-1/12" combined with mathematicians telling the public "oh yeah isn't that crazy but it's totally true" basically amounts to gaslighting everyone about what arithmetic does and should be strongly discouraged.
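To unpack what "equality with Abel summation" means operationally, here is a sketch (function names my own): evaluate the power series sum a_k x^k for x just below 1 and watch where it heads as x approaches 1.

```python
def abel_partial(coeff, x, n_terms=10**5):
    """Truncated evaluation of sum_{k>=0} coeff(k) * x**k. The Abel sum of
    the series is the limit of this quantity as x -> 1 from below."""
    return sum(coeff(k) * x ** k for k in range(n_terms))

# 1 - 2 + 3 - 4 + ... diverges as an ordinary sum, but its power series
# equals 1/(1+x)**2 for |x| < 1, which tends to 1/4 as x -> 1.
alternating = lambda k: (-1) ** k * (k + 1)
for x in (0.9, 0.99, 0.999):
    print(x, abel_partial(alternating, x))
```

Written this way, "1-2+3-4... #1/4" is an honest statement about a limit of convergent sums, with no pretense that the plain series adds up to anything.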

Comment author: JacobLiechty 22 January 2017 06:06:42AM 1 point [-]

in a welcome thread

It was on the recent "Month 1 Less Wrong Revival" from Raemon. You've reminded me to post a Welcome hello here!

I've never heard of metatroll either, but I won't hold that against them :)

Comment author: JacobLiechty 22 January 2017 06:04:01AM *  4 points [-]

Was reminded to say hello here!

I'm Jacob Liechty, with a new account after using a less active pseudonym for a while. I've been somewhat active around the rationality community and know a bunch of people therein and throughout. Rationalism and its writings had a pretty deep impact on my life about 5 years ago, and I haven't been able to shake it since.

I currently make video games for a living, but will be keeping my finger on the pulse to determine when to move into more general tech startups, some sort of full-time philanthropy, maybe start an EA nonprofit or metacharity, or who knows. I'm one of the creators of a game called Astroneer, which has been doing quite well, which opens up a lot of opportunities but also gives me some responsibilities of managing it well for the purposes of giving.

Comment author: Elo 22 January 2017 05:33:12AM 1 point [-]

I've never heard of you.

Jacob recently renamed from a pseudonym. He mentioned it in a welcome thread (or an open thread, I think).

I've never heard of Keganism

Robert Kegan's work in the stages of developmental psychology is definitely a concept that hangs around and is debated. (good summary here https://meaningness.wordpress.com/2015/10/12/developing-ethical-social-and-cognitive-competence/)

I've never heard of your "primary texts of metarationality"

I know of these blogs but never make much sense of them myself. I figured it was personal preference of writing styles.

Comment author: CronoDAS 22 January 2017 05:31:29AM 1 point [-]

I actually know a bit about summing series, so I recognize the proof as completely bogus but the actual sum as probably correct, for a certain sense of "sum". You can make a divergent series add up to anything at all by grouping and rearranging terms. On the other hand, there actually are techniques for finding the sum of a convergent series that sometimes don't give nonsensical answers when you try to use them to find the "sum" of a divergent series, and in this sense the sum of 1 + 2 + 3 + etc. actually can be said to equal -1/12.
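The rearrangement trick is concrete enough to demo. A sketch (my own; the classic example is the conditionally convergent series 1 - 1/2 + 1/3 - 1/4 + ..., whose terms can be reordered to approach any target, per Riemann's rearrangement theorem):

```python
def rearrange_to(target, n_steps=10**5):
    """Rearrange the alternating harmonic series toward a chosen target:
    greedily take the next unused positive term (1, 1/3, 1/5, ...) while the
    running total is below target, otherwise the next unused negative term
    (-1/2, -1/4, -1/6, ...)."""
    odd, even, total = 1, 2, 0.0
    for _ in range(n_steps):
        if total <= target:
            total += 1.0 / odd
            odd += 2
        else:
            total -= 1.0 / even
            even += 2
    return total

print(rearrange_to(3.0))   # partial sums hover around 3
print(rearrange_to(-1.0))  # partial sums hover around -1
```

The greedy scheme works because each sign's terms alone sum to infinity while the individual terms shrink to zero, so the running total can always cross the target and overshoots by less and less.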

Comment author: lifelonglearner 22 January 2017 05:02:48AM 1 point [-]

Hm, thanks for your thoughts on the matter. I've noticed, too, that once I get a thing to be "not too terrible", it feels less like I have to work on it. But then I'll just prioritize other things over it.

Comment author: metatroll 22 January 2017 04:55:50AM 1 point [-]

I've never heard of Keganism, I've never heard of your "primary texts of metarationality", and I've never heard of you.
