Mod notice: There's a discussion going on in the Bay Area rationality community involving multiple users of LW that includes allegations of serious misconduct. We don't think LW is a good venue to discuss the issue or conduct investigations, but we think it's important for the safety and health of the LW community that we host links to a summary of findings once the discussion has concluded. If you'd like to discuss this policy, please send a private message to me and I'll talk it over with the mod team. [Comments on this comment are disabled.]
I've been thinking a lot lately about where I want to live long-term. I'm currently in Madison, WI, which is really nice, but kinda small, and it has an unfortunately hot/humid summer. Financially I can live pretty much anywhere I want, except maybe Monaco.
Things I want, not in order of importance:
1. A nice house. In an ideal world, the house would house several of my closest friends, be walkable to parks, shops, and restaurants, and be close enough to other friends that they drop by regularly. I am also very interested in running a public or semi-public space adjacent or close to the house, possibly a makerspace, possibly a cafe, or something else. This is one of the reasons it's not instantly obvious that I should move to Berkeley or Manhattan or something: I'm financially well-off, but there's, like, an order of magnitude of difference in the cost of having a nice big place to live. On the other hand, I'm also pretty flexible about living in an apartment or something, but for the long term I much prefer having a space I own and can modify and build up to become better and better over the years.
2. People. My best friend and one of my partners lives in Ma...
A major consideration / uncertainty here seems to be "is a hub in Madison something remotely practical?", and you might want to specifically test that with something kickstarter-esque (i.e. "I will try this if and only if at least X other people commit to moving here", etc.)
(Testing this unfortunately is a fair bit of work, but relatively small compared to the work involved in the actual project, so maybe it's also a good test of "can drethelin pull this off?")
I think it would be beneficial to always link to the previous open thread in the main text of each new open thread.
Folk values -- the qualities of the "I love science" crowd as contrasted to the qualities of actual, exceptional scientists -- matter too. The common folk outnumber the epic heroes.
This holds true even if you believe that everyone can become an epic hero! People need to know, rather than guess and hope, that walking the path to becoming an epic hero might look and feel rather different than doing active epic heroing. In theory one ought to be able to derive the appropriate instrumental goals from the terminal goal, but in practice people very frequently mess this up.
The general crowd has a different job than the inner circle, and treating this difference as orthogonal propagates fewer errors than treating it as a matter of degree.
Folk rationality needs to strongly protect against infohazards until one gets a chance to develop less vulnerable internal habits. Folk rationality needs to celebrate successfully satisficing goals and identifying picas rather than going for hard optimization because amateur min-maxing just spawns Goodhart demons every which way. Folk rationality needs to prize keeping social commitments and good conflict mediation tools; it needs to honor social...
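To make the min-maxing point concrete, here is a toy sketch (the noise model, the threshold, and all the numbers are made-up assumptions for illustration, not a claim about any real system): hard-optimizing a noisy proxy systematically selects for the noise, while satisficing largely avoids it.

```python
import random

# Toy model: each option has a true value and a noisy proxy score.
# Hard-maximizing the proxy picks whichever option got the luckiest noise
# draw (Goodhart / optimizer's curse); satisficing a "good enough" threshold
# selects far less aggressively on the noise.
random.seed(0)
options = [(v := random.gauss(0, 1), v + random.gauss(0, 1)) for _ in range(1000)]

best_by_proxy = max(options, key=lambda o: o[1])
satisficed = next((o for o in options if o[1] > 1.0), options[0])  # first "good enough" option

print("hard-optimized: proxy %.2f, true %.2f" % (best_by_proxy[1], best_by_proxy[0]))
print("satisficed:     proxy %.2f, true %.2f" % (satisficed[1], satisficed[0]))
# Expect the hard-optimized pick's true value to fall well short of its proxy
# score; the satisficed pick's gap is typically much smaller.
```

The gap between proxy and true value for the argmax grows with how many options you search over, which is the sense in which harder optimization "spawns" the Goodhart demon.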
The EU seems set to get rid of the habit of changing the clocks twice a year, in an exercise of listening to public feedback.
He said that the decision was taken after a vast majority of EU citizens — primarily from Germany — who took part in a survey on the issue called for an end to biannual clock changes.
Massive support for halting daylight saving time
Over 80 percent of respondents supported abolishing changing the clocks in summer and winter in a survey that ran between July 4 and August 16, according to media reports on the results.
It's interesting that the EU currently seems able to coordinate on an issue like this, where the right answer is more or less obvious but the coordination problem is massive.
Do we have other similar problems with obvious answers that are just a matter of getting enough people coordinated?
The main argument in favor of changing the clocks, that it supposedly saves energy, doesn't seem to hold these days.
Two examples of the costs: people seem to work 16 minutes less on the Monday after the switch to daylight saving time, and the switch also increases heart attacks.
I remember reading on LessWrong a while back about a study that compared trained psychologists to laypeople and found that the trained psychologists didn't do any better. Does anybody know the study or the LessWrong post?
Eliezer made this attempt at naming a large number computable by a small Turing machine. What I'm wondering is exactly what axioms we need in order to prove that this Turing machine does indeed halt. The description of the Turing machine uses a large cardinal axiom ("there exists an I0 rank-into-rank cardinal"), but I don't think that assuming this cardinal is enough to prove that the machine halts. Is it enough to assume that this axiom is consistent? Or is something stronger needed?
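My hazy picture of why mere consistency might not suffice (a sketch in generic notation, with T standing for ZFC plus the I0 axiom and M for the machine; none of this is specific to Eliezer's particular construction, so I'd welcome a correction):

```latex
% ``M halts'' is a \Sigma_1 sentence:
\exists n \;\mathrm{HaltsWithin}(M, n)
% Consistency of T only rules out proofs of contradiction:
\mathrm{Con}(T) \;\equiv\; \neg\,\mathrm{Prov}_T(\ulcorner 0 = 1 \urcorner)
% To pass from ``T proves that M halts'' to ``M actually halts'' one seems to
% need T to be \Sigma_1-sound (equivalently, 1-consistent):
\forall \varphi \in \Sigma_1 \;\bigl(\mathrm{Prov}_T(\ulcorner \varphi \urcorner) \rightarrow \varphi\bigr)
% This is strictly stronger than \mathrm{Con}(T): the theory T + \neg\mathrm{Con}(T)
% is consistent (if T is), yet it proves the false \Sigma_1 sentence \neg\mathrm{Con}(T).
```

So my guess is that something like "ZFC + I0 is 1-consistent" is closer to what's actually needed than bare consistency.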
I used to be quite good at math in high school, but I haven't studied it since. This seems like a good opportunity to ask: which book(s) should I read in order to fully understand that post?
Assume solid knowledge of high-school math, but almost nothing beyond that. I want to get from there to understanding the cardinals and ordinals. I have a vague impression of what they likely are, but I'd like to have a solid foundation, i.e. to know the definitions and understand the proofs (in the ideal case, to be able to prove some things independently).
Bonus points if the books you mention are available at Library Genesis. ;)
As well as ordinals and cardinals, Eliezer's construction also needs concepts from the areas of computability and formal logic. A good book to get introduced to these areas is Boolos' "Computability and Logic".
Two good first books on set theory (with a similar scope) are
(Though they might be insufficient to parse the post.)
Keep in mind that set theory has a very different character from most math, so it might be better to turn to something else first if "studying math" is more of a motivation.
I'm a bit confused by RationalWiki. Is it maintained by anyone? I saw the page for EY, and it seemed to be either genuinely harsh/scathing/dismissive, or a poorly executed inside joke.
RationalWiki is maintained by people who really dislike Less Wrong in general and Eliezer personally.
My own view is that RationalWiki is a terrible, terrible source for anything.
RationalWiki is older than LW, and their definition of "rationality" is quite different from the one used here.
To put it simply, their "rationality" means science as taught at universities + politically correct ideas; and their "irrationality" means pseudoscience + religion + politically incorrect ideas + everything that feels weird (such as the many-worlds interpretation).
Also, their idea of rational discussion is that after you have decided that something is irrational, you should describe it in a snarky way, and feel free to exaggerate if it makes the page funnier. So when anyone later points out a factual error in your article, you can defend it by saying "it was obviously a joke, moron".
In my understanding, this is how they most likely got obsessed with Eliezer and LessWrong:
1) How does a high-school dropout dare to write a series of articles about quantum physics? Only university professors are allowed to have opinions on such a topic. Obviously, he must be a crackpot. And he even identifies as a libertarian, which makes him a perfect target for RationalWiki: attack pseudoscience and right-wing politics in the same strike!
2) Oops, a debate at S...
It's worth noting that David Gerard contributed a lot on LessWrong in its early days as well, so he's not really someone who's simply an outsider.
Slightly better than the last time I saw it.
Still, the "Neoreaction" section is 3x longer than the "Effective Altruism" section. Does anyone other than David Gerard believe this impartially describes Less Wrong? (And where are the sections for the other political alignments mentioned in LW surveys? Oh, we are cherry picking, of course.)
No mention of the Sequences, other than "seed material to create the community blog". I guess truly no one reads them anymore. :(
I guess truly no one reads them anymore. :(
Not true!
ReadTheSequences.com has gotten a steady ~20k–25k monthly page views (edit: excluding bots/crawlers, of course!) for 11 months and counting now, and I am aware of a half-dozen rationality reading groups around the world which are doing Sequence readings (and that’s just those using my site).
(And that doesn’t, of course, count people who are reading the Sequences via LW/GW, or by downloading and reading the e-book.)
If Jehovah's Witnesses come to my door, I spend a few minutes talking with them, and then ask them to leave and never return, will I also get a subsection "Jehovah's Witnesses" on Wikipedia? I wouldn't consider that okay even if the subsection contained the words "then Viliam told them to go away". Like, why mention it at all, if that's not what I am about?
I suppose if there were a longer article about LW, I wouldn't mind spending a sentence or two on NR. It's just that in the current version, the mention is disproportionately long -- and it has its own subsection to make it even more salient. Compare with how much space the Sequences get; actually, they are not mentioned at all. But there is a whole paragraph about the purpose of Less Wrong. One paragraph about everything LW is about, and one paragraph mentioning that NR was here. Fair and balanced.
I mostly agree, except for:
I don't think anyone is going to be persuaded by the WP article that LW is full of neoreactionaries, and if someone who has that impression reads the article they might even be persuaded that they're wrong.
I believe this is not how most people think. The default human mode is thinking in associations. Most people will read the article and remember that LW is associated with something weird right-wing. Especially when "neoreaction" is a section header, which makes it hard to miss. The details about who took interest in whom, if they notice them at all, will be quickly forgotten. (Just like when you publicly debunk some myths, it can actually make people believe them more, because they will later remember they heard it, and forget it was in the context of debunking.)
If the article instead had a section called "politics on LW" mentioning the 'politics is the mindkiller' slogan, how Eliezer is a libertarian, and then the complete results of a political poll (including the NR)... most people would not remember that NR was mentioned there.
Similarly, the length of sections is instinctively perceived as a degree of...
RW's priors are in the right place, at least.
I fully agree (about the priors on QM). The problem is somewhere else. I see two major flaws:
First, the "rationality" of RW lacks self-reflection. They sternly judge others, but consider themselves flawless. To explain what I mean, imagine that I knew nothing about QM other than the fact that 99% of online writings about QM are crackpottery; and then I found an article about QM that sounds weird. -- Would I trust the article? No. That's what the priors are for. Would I write my own article denouncing the author of the other article as a crackpot? No. Because I would be aware that I know nothing about QM, and that despite the 99% probability of crackpottery, there is also the 1% probability it is correct; and that my lack of knowledge does not allow me to update after reading the article itself, so I am stuck with my priors. I would try to leave writing the denunciation to someone who actually understands the topic; to someone who can say "X is wrong, because it is actually Y", instead of merely "X is wrong, because, uhm, my priors" or even "X is wrong, trust me, I am the expert"...
I think given that we seem to have settled on Open Threads being stickied, we can get rid of the first bullet point.
How do people organize their long ongoing research projects (academic or otherwise)? I do a lot of these but think I would benefit from more of a system than I have right now.
Just finished reading Yuval Noah Harari's new book 21 Lessons for the 21st Century. Primary reaction: even if you already know all the things the book presents, it is worth a read just for the clarity it brings to the discussion.
This article seems to have some bearing on decision theory, but I don't know enough about it or quantum mechanics to say what that bearing might be.
I'd be interested to know others' take on the article.
Should the mind projection fallacy actually be considered a fallacy? It seems like being unable to imagine a scenario where something is possible is in fact Bayesian evidence that it is impossible, but only weak Bayesian evidence. Being unable to imagine a scenario where 2+2=5, for instance, could be considered evidence that 2+2=5 is impossible.
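A quick numerical sketch of the "weak evidence" point (the probabilities here are pure assumptions, chosen only to show the shape of the update):

```python
def posterior(prior_impossible, p_fail_given_impossible, p_fail_given_possible):
    """P(impossible | failed to imagine a counterexample), via Bayes' rule."""
    joint_imp = prior_impossible * p_fail_given_impossible
    joint_pos = (1 - prior_impossible) * p_fail_given_possible
    return joint_imp / (joint_imp + joint_pos)

# Assume failing to imagine a scenario is somewhat more likely when the thing
# really is impossible (0.9) than when it's possible but unimagined (0.6):
print(posterior(0.5, 0.9, 0.6))  # 0.6 -- a nudge, nowhere near certainty
```

The update is real but small unless failure-to-imagine is dramatically more likely for truly impossible things, which is exactly the "weak evidence" reading.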
If it’s worth saying, but not worth its own post, then it goes here.
Notes for future OT posters: