Comment author: 21 January 2017 03:26:28AM *  14 points [-]

I agree that a careful thinker confronted with this puzzle for the first time should eventually conclude that the crux is what exactly the expression "0.999..." actually means. At this point, if you don't know enough math to give a rigorous definition, I think a reasonable response is "I thought I knew what it meant to have an infinite number of 9s after the decimal point, but maybe I don't, and absent me actually learning the requisite math to make sense of that expression I'm just going to be agnostic about its value."

Here's an argument in favor of doing that. Consider the following proof, nearly identical to the one you present. Let's consider the number x = ...999; in other words, now we have infinitely many 9s to the left of the decimal point. What is this number? Well,

10x = ...9990

x - 10x = 9

-9x = 9

x = -1.

There are a couple of reasonable responses you could have to this argument. Two of them require knowing some math: one is enough math to explain why the expression ...999 describes the limit of a sequence of numbers that has no limit, and one is knowing even more math than that, so you can explain in what sense it does have a limit (the details here resemble the details of 1 + 2 + 3 + ... but are technically easier). I think in the absence of the requisite math knowledge, seeing this argument side by side with the original one makes a pretty strong case for "stay agnostic about whether this notation is meaningful."
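Absent that math knowledge, one can at least see numerically what the two responses are pointing at. The sketch below is my own illustration, not part of the original comment: the partial sums of ...999 grow without bound among the ordinary reals, yet each one is congruent to -1 modulo the matching power of ten, which is the sense in which the algebra above lands on x = -1.

```python
# Partial sums of ...999: 9, 99, 999, ... have no limit among the reals.
partials = [10**k - 1 for k in range(1, 6)]
print(partials)  # [9, 99, 999, 9999, 99999]

# But modulo 10^k, each partial sum behaves exactly like -1:
# adding 1 to it carries off the end and leaves 0.
for k in range(1, 6):
    n = 10**k - 1
    assert n % 10**k == (-1) % 10**k   # congruent to -1 mod 10^k
    assert (n + 1) % 10**k == 0        # ...999 + 1 "= 0", so x "=" -1
```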

And on the third hand, I can't resist saying one more thing about infinite sequences of decimals to the left. Consider the following sequence of computations:

5^2 = 25

25^2 = 625

625^2 = 390625

0625^2 = 390625

90625^2 = 8212890625

890625^2 = 793212890625

It sure looks like there is an infinite decimal going to the left, x, with the property that x^2 = x, and which ends ...890625. Do you agree? Can you find, say, 6 more of its digits, assuming it exists? What's up with that? Is there another x with this property? (Please don't spoil the answer if you know what's going on here without some kind of spoiler warning or e.g. rot13.)
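(For anyone who wants to check the arithmetic above without spoiling the puzzle, here is a quick sanity check of my own; it deliberately verifies only the squarings already shown and reveals nothing about the mechanism.)

```python
# Each square shown above really does end in the digits being squared.
for n in [5, 25, 625, 90625, 890625]:
    assert str(n * n).endswith(str(n)), n
print("all squarings check out")
```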

Comment author: 23 January 2017 06:47:54PM *  1 point [-]

Let's consider the number x = ...999; in other words, now we have infinitely many 9s to the left of the decimal point.

My gut response (I can't reasonably claim to know math above basic algebra) is:

• Infinite sequences of numbers to the right of the decimal point are in some circumstances an artifact of the base. In base 3, 1/3 is 0.1 and 1/10 is 0.00220022..., but 1/10 "isn't" an infinitely repeating decimal and 1/3 "is" -- in base 10, which is what we're used to. So, heuristically, we should expect that some infinitely repeating representations of numbers are equal to some representations that aren't infinitely repeating.

• If 0.999... and 1 are different numbers, there's nothing between 0.999... and 1, which doesn't jibe with my intuitive understanding of what numbers are.

• The integers don't run on a computer processor. Positive integers can't wrap around to negative integers. Adding a positive integer to a positive integer will always give a positive integer.

• 0.999... is 0.9 + 0.09 + 0.009, etc., whereas ...999.0 is 9 + 90 + 900, etc. Every partial sum of the latter is a positive integer.

• There is no finite number larger than ...999.0. A finite number must have a finite number of digits, so you can compute ...999.0 to that many digits and one more. So there's nothing 'between' ...999.0 and infinity.

• Infinity is not the same thing as negative one.
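The first bullet's point about bases can be checked directly. The digit-extraction loop below is my own illustration (the function name is made up), using exact rational arithmetic to avoid floating-point noise:

```python
from fractions import Fraction

def digits_after_point(num, den, base, count):
    """First `count` digits of num/den after the point, in the given base."""
    x = Fraction(num, den)
    out = []
    for _ in range(count):
        x *= base
        d = int(x)      # integer part is the next digit
        out.append(d)
        x -= d          # keep only the fractional part
    return out

print(digits_after_point(1, 3, 3, 4))    # [1, 0, 0, 0] -> 0.1 in base 3, terminates
print(digits_after_point(1, 10, 3, 8))   # [0, 0, 2, 2, 0, 0, 2, 2] -> repeats
print(digits_after_point(1, 3, 10, 4))   # [3, 3, 3, 3] -> repeats in base 10
```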

All I have to do to accept that 0.999... is the same thing as 1 is accept that some numbers can be represented in multiple ways. If I don't accept this, I have to reject the premise that two numbers with nothing 'between' them are equal; that is, if 0.999... != 1, then it's not the case that for any x and y where x != y, x is either greater than or less than y.

But if I accept that ...999.0 is equal to -1, I have to accept that adding together some positive numbers can give a negative number, and if I reject it, I just have to say that multiplying an infinite number by ten doesn't make sense. (This feels like it's wrong but I don't know why.)

Comment author: 19 January 2017 03:51:36PM *  0 points [-]

If someone wins the Nobel prize, you heard it here first.

The is-ought problem implies that the universe is deterministic, which is incorrect; it's an infinite range of possibilities or probabilities which are consistent but can never be certain. Hume's beliefs about is-ought came from his own understanding of his emotions and the emotions of those around him. He correctly presumed that emotion is what drives us and that logic and rationality could not (thus nothing ought to be in any way just because things are), and he thought the universe was deterministic (without knowledge of the brain and QM). The insight he wasn't aware of: even though his emotions are the driving factor, he misses that he can be emotionally aligned with rationality, logic, and facts, so there is no ought-to-be from what-is. 'What is' implies facts, rationality, logic and so on: EA/Utilitarian ideas. The question about free will is an emotional one; if you are aware that your subjective reference frame, your awareness, was a part of it, then you can let go of that.

Comment author: 23 January 2017 06:31:52PM 0 points [-]

The is-ought problem implies that the universe is deterministic

What?

Comment author: 18 January 2017 02:37:02PM *  0 points [-]

that's at least on the right side of the is-ought gap.

I'm having a hard time understanding what you mean.

Accepting facts fully just is EA/Utilitarian ideas; there is no 'ought' to be. 'Leads' was the incorrect word choice.

Comment author: 19 January 2017 07:47:52AM 1 point [-]

No. Accepting facts fully does not lead to utilitarian ideas. This has been a solved problem since Hume, FFS.

Comment author: 16 January 2017 08:23:45PM *  0 points [-]

How disappointing. No one on LW appears to want to discuss this. Except for a few who undoubtedly misunderstood this post and started raving about some irrelevant topics. At least let me know why you don't want to.

1) How would we go about changing human behavior to be more aligned with reality?

Aligned with reality = Accepting facts fully (probably leads to EA ideas, science, etc)

2) When presented with scientific evidence, why do we not change our behavior? That's the question and how do we change it?

Comment author: 18 January 2017 02:09:28PM 0 points [-]

Accepting facts fully (probably leads to EA ideas,

It's more likely to lead to Islam; that's at least on the right side of the is-ought gap.

Comment author: 15 January 2017 05:43:35PM 0 points [-]

Ooh there's a cool idea, I hadn't thought of that.

Another angle is the possibility that vastly improved, directly implanted translators - a babelfish, basically - might make the whole thing moot. You learn your first language and then have absolutely no need, ever, to learn another. Language could be more or less frozen wherever it stands at the time. That's if the technology is universally available - things get even more interesting if it's only available to the wealthy, or to citizens of wealthy nations.

Comment author: 15 January 2017 09:56:23PM 0 points [-]

Language could be more or less frozen wherever it stands at the time.

No it wouldn't -- language is for signaling, not only communication. There would probably be a common language for business and travel, but languages would continue to develop normally, since people would still want to use language to determine how they present themselves.

Comment author: 09 January 2017 05:29:08PM *  2 points [-]

(Unfortunately) the actual rationalist-who-wins is the one who goes about his ambitions like a good Slytherin and never publicly states his beliefs.

I think this is mostly true, though there are a few problems with "Slytherin Rationality".

it was irrational for Aaronson to open his mouth about Feminism

Suppose modern elevator-gate-y feminism operates a bit like a mafia protection racket: they (the feminists) cream off status and money for themselves by propagating a set of ideas that are clearly ridiculous, but they keep everyone in line by threatening to doxx and shame and generally destroy the reputation of anyone who challenges them. A small group of Rebecca Watsons could dominate a much larger group of Slytherin Rationalists if all the Slytherins aren't prepared to take even a small risk to stand up for what they believe in.

talking about identitarian-adjacent topics like genetic modification without first carefully preparing the ground for discussion is going to be risky.

If you never talk about the things that you actually care about, you will never manage to find people who you want to be close friends and allies with.

You can "prepare the ground" to some extent, but really what that means is that you take the slow route to unfriending the person rather than the fast route. You want to hang around in your free time with someone who you have to constantly filter yourself around and construct elaborate lies for? I didn't think so....

Preparing the ground is probably best used on someone who you see as a means to something, for example you want to extract favors from them, get money or other contacts from them, etc.

Comment author: 14 January 2017 10:40:07PM 1 point [-]

If you never publicly state your beliefs, how are you supposed to refine them?

But if you do publicly state your beliefs, the Rebecca Watsons can eat you, and if you don't, the Rebecca Watsons can coordinate against you.

How do you solve that?

"I believe that it's always important to exchange views with people, no matter what their perspectives are. I think that we have a lot of problems in our society and we need to be finding ways to talk to people, we need to find ways to talk to people where not everything is completely transparent. ... I think often you have the best conversations in smaller groups where not everything is being monitored. That's how you have very honest conversations and how you can think better about the future." -- Thiel on Bilderberg

Comment author: 13 January 2017 09:35:24AM 1 point [-]

This was not the content of an article I expected to be written by the mind behind Brexit.

Why? Rationalists are more likely to embrace weird or counterintuitive positions supported by chains of reasoning. I don't mean this as a bad thing. I would think the probability of a rationalist being behind a weird and unconventional position is higher than baseline.

Comment author: 14 January 2017 10:15:47PM 2 points [-]

Right, and he addresses this in the article:

This lack of motivation is connected to another important psychology – the willingness to fail conventionally. Most people in politics are, whether they know it or not, much more comfortable with failing conventionally than risking the social stigma of behaving unconventionally. They did not mind losing so much as being embarrassed, as standing out from the crowd. (The same phenomenon explains why the vast majority of active fund management destroys wealth and nobody learns from this fact repeated every year.)

We plebs can draw a distinction between belief and action, but political operatives like him can't. For "failing conventionally", read "supporting the elite consensus".

Now, 'rationalists', at least in the LW sense (as opposed to the broader sense of Kahneman et al.), have a vague sense that this is true, although I'm not sure if it's been elaborated on yet. "People are more interested in going through the conventional symbolic motions of doing a thing than they are in actually doing the thing" (e.g. "political actors are more interested in going through the conventional symbolic motions of working out which side they ought to be on than in actually working it out") is widespread enough in the community that it's been blamed for the failure of MetaMed. (Reading that post, it sounds to me like it failed because it didn't have enough sales/marketing talent, but that's beside the point.)

Something worth noting: the alternate take on this is that, while most people are more interested in going through the conventional symbolic motions of doing a thing than they are in actually doing the thing, conventional symbolic motions are still usually good enough. Sometimes they aren't, but usually they are -- which allows the Burkean reading that the conventional symbolic motions have actually been selected for effectiveness to an extent that may surprise the typical LW reader.

It should also be pointed out that, while we praise people or institutions that behave unconventionally to try to win when it works (e.g. Eliezer promoting AI safety by writing Harry Potter fanfiction, the Trump campaign), we don't really blame people or institutions that behave conventionally and lose. So going through the motions could be modeled purely by calculation of risk, at least in the political case: if you win, you win, but if you support an insurgency and lose, that's a much bigger deal than if you support the consensus and lose -- at least for the right definition of 'consensus'. But that can't be a complete account of it, because MetaMed.

Comment author: 17 December 2016 10:27:51PM 6 points [-]

I've written a bit about this, but I never finished the sequence and don't really endorse any of it as practical. Some of the comment threads may have useful suggestions in them, though.

Discussion quality is a function of the discussants more than the software.

I think we are better off using something as close to off-the-shelf as possible, modified only via intended configuration hooks. Software development isn't LW's comparative advantage. If we are determined to do it anyway, we should do it in such a way that it's useful to more than just us, so as to potentially get contributions from elsewhere.

What's the replacement plan? Are we building something from the ground up, re-forking Reddit, or something else? I've nosed around contributing a few times and keep getting put off by the current crawling horror. If we're re-building from something clean, I might reconsider.

Comment author: 18 December 2016 01:20:20PM 1 point [-]

Discussion quality is a function of the discussants more than the software.

But daydreaming about the cool new social media software we're totally going to write is so fun!

In response to comment by on Circles of discussion
Comment author: 16 December 2016 02:32:45PM 8 points [-]

This is seeking a technological solution to a social problem.

It is still strange to me that people say this as if it were a criticism.

In response to comment by on Circles of discussion
Comment author: 18 December 2016 01:17:37PM 1 point [-]

People have been building communities with canons since the compilation of the Torah.

LW, running on the same Reddit fork it's on today, used to be a functional community with a canon. Then... well, then what? Interesting content moved offsite, probably because 1) people get less nervous about posting to Tumblr or Twitter than posting an article to LW 2) LW has content restrictions that elsewhere doesn't. So people stopped paying attention to the site, so the community fragmented, the barrier to entry was lowered, and now the public face of rationalists is Weird Sun Twitter and Russian MRAs from 4chan who spend their days telling people to kill themselves on Tumblr. Oops!

(And SSC, which is a more active community than LW despite running on even worse software.)

Comment author: 16 December 2016 01:49:54PM 3 points [-]

This is seeking a technological solution to a social problem.

The proposed technological solution is interesting, complicated, and unlikely to ever be implemented. It's not hard to see why the sorts of people who read LW want to talk about interesting and complicated things, especially ones that don't require much boring stuff like research. But I highly doubt that anyone is going to sit down and do the work of implementing it or anything like it, and in the event that anyone ever does, it'll likely take so long that many of the people who'd otherwise use LW or its replacement will lose interest in the interim, and it'll likely be so confusing that many more people are turned off by the interface and never bother to participate.

If we want interesting, complicated questions that don't require a whole lot of research, here's one: what exactly is LW trying to do? Once this question has been answered, we can go out and research similar groups, find out which ones accomplished their goals (or goals similar to ours, etc.) and which ones didn't, and try to determine the factors that separate successful groups from failed ones.

If we want uninteresting, uncomplicated questions that are likely to help us achieve our goals, here's one: do we have any managers in the audience? People with successful business experience, maybe in change management or something of that nature? I'm nowhere near old or experienced enough to nominate myself, or even to name the most relevant subdomains of management with any confidence, but I've still seen a lot of projects that failed due to nonmanagers' false assumption that management is trivial, and a few projects in the exact same domain that succeeded due to bringing in one single competent manager.

As Anna Salamon set out, the goal is to create a commons of knowledge, such that a great many people have read the same stuff.

There's already a lot of stuff from the post-LW fragmentation that a great many people have read. How about identifying and compiling that? And since many of these things will be spread out across Tumblr/Twitter/IRC/etc. exchanges rather than written up in one single post, we could seed the LW revival with explanations of them. This would also give us something more interesting and worthwhile to talk about than what sort of technological solution we'd like to see for the social problem that LW can't find anything more interesting and worthwhile to talk about than what sort of technological solution we'd like to see for the social problem that LW can't find anything interesting or worthwhile enough to get people posting here.
