Hi, I’m a new user who stumbled across this, so I figured it would be worth commenting. I came here via effective altruism and have now read a decent chunk of the Sequences, so LW isn’t totally new to me as of reading this, but still.
I definitely wish this introduction had been here when I first decided to take a look at LessWrong - it was a little confusing to figure out what the community was even supposed to be. The introductory paragraph is excellent at communicating the core idea the community is built around, and the following sections seem super efficient at getting us up to speed on what LW is.
I find the How To Get Started section very confusing. It seems at first like a list of things you need to do before participating on the forum, but I guess it’s supposed to be a rough progression of things you can do to become more of a LessWronger, considering it has attending a meet-up on there? The paragraph afterwards also doesn’t make any sense to me - it says there’s not a tonne, but you’ll probably be missing something on your first day… yet it seems to me like it IS a tonne (the Sequences alone are really long!), and on your first day you won’t have done ANY of them (except general reading). Maybe you meant to say that it’s a list of possible things to get yourself clued up, but you don’t need to do a tonne of it?
Finally, I had already commented with no idea comments would be moderated so heavily, so including that info is definitely helpful - plus the information about content standards is just generally super useful to know from the start anyway.
Overall this seems really good and gets the important questions answered quickly. Honestly there’s not anything I wish was there that isn’t, or anything that is there that seems unnecessary. Great work 👍
Well luckily this question gave me enough karma to upvote your answer :) thanks!
Weird, because the comments I made didn’t receive any votes, but it seems like I stopped being able to vote after writing them. Unless this requirement was added in the last few weeks, which would explain the change.
Ethics is (infuriatingly) unique in this aspect.
Discussion of beliefs that do not make observable predictions is unproductive (Making Beliefs Pay Rent), and discussion of beliefs that do not make ANY predictions about ANYTHING EVER is literally meaningless (the different versions of reality are not meaningfully distinguishable).
That said… ethics poses an exception to this rule, because although ethical beliefs don’t make predictions (for anything ever), they still have implications for how you should behave. This is entirely unique to ethical beliefs.
As much as I’d love to do away with the infinite rambling debates over predictionless beliefs, ethics stands in the way. These are beliefs that pay rent not in the currency of predictions to be used in achieving your goals, but in the form of the very goals themselves - an offer so irresistible to instrumental rationalists such as myself that we will trample far past our ordinary epistemic boundaries to grasp at it.
The point Daniel makes about morality - that your actions if you don’t believe in moral truths should be the same as those if you do - IS relevant to people who care about INSTRUMENTAL epistemic rationality (the irrelevance of this matter is itself relevant, if you get what I mean).
“Mistakenly equivocating” is not quite fair. It’s plainly obvious that he meant “wrong” in the moral sense, considering he literally opened with “if there are no ethical truths…”. (Plus, I’m taking “assume” to mean “act as though” rather than “believe”, which also solves your point of disagreement)
I think the argument that explanations for the blue tentacle are bad because they wouldn’t predict the blue tentacle is flawed.
The theory that you are not hallucinating and that there is no greater intelligent power places far, far lower likelihood on waking up with a blue tentacle than the theory that there is either a greater intelligent power or you are hallucinating, even though both likelihoods are obscenely low. What matters is the likelihood ratio, so waking up with a blue tentacle is strong evidence that there is either a greater intelligent power or you are hallucinating. In the event that I woke up with a blue tentacle, I would adjust my beliefs and expectations accordingly, mainly by having vaguer expectations and not ruling out “impossible” things so much.
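To make the likelihood-ratio point concrete, here’s a minimal sketch in Python with made-up numbers (the priors and likelihoods below are illustrative assumptions, not anything canonical):

```python
# Toy Bayes update for the blue-tentacle case.
# All probabilities are invented for illustration.

prior_mundane = 0.999999   # P(no hallucination, no greater power)
prior_weird   = 1e-6       # P(hallucinating OR greater intelligent power)

# Likelihood of waking up with a blue tentacle under each hypothesis.
# Both are obscenely small, but they differ by many orders of magnitude.
lik_mundane = 1e-30
lik_weird   = 1e-10

# Posterior odds = prior odds * likelihood ratio (Bayes' theorem in odds form).
posterior_odds_weird = (prior_weird / prior_mundane) * (lik_weird / lik_mundane)
print(posterior_odds_weird)  # ~1e14: "weird" ends up overwhelmingly favored
```

Even starting from a one-in-a-million prior, the enormous likelihood ratio swamps it.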
A hypothesis can accumulate a large probability just by all other hypotheses losing probability, even if the hypothesis is not predictive.
Let’s say you flip a coin a very large number of times and get a sequence with roughly equal numbers of heads and tails. If asked to explain how you got that sequence, the explanation that the coin is equally likely to land heads or tails is not bad just because you wouldn’t have expected that particular outcome. It’s by far the most likely hypothesis simply because all the others place even lower likelihood on the outcome.
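As a rough numerical sketch (the sequence length and bias values below are arbitrary assumptions), the fair-coin hypothesis assigns a vanishingly small probability to any particular balanced sequence, yet still beats every biased alternative on that same sequence:

```python
from math import log

# Log-likelihood of a specific sequence with 500 heads and 500 tails
# under coins of varying bias. Any specific sequence is astronomically
# improbable (~2^-1000 for the fair coin), but that's true under every
# hypothesis - what matters is the comparison between them.
heads, tails = 500, 500

def log_likelihood(p_heads):
    return heads * log(p_heads) + tails * log(1 - p_heads)

for p in (0.5, 0.4, 0.3, 0.1):
    print(f"p = {p}: log-likelihood = {log_likelihood(p):.1f}")
# p = 0.5 comes out highest (least negative): the fair coin loses
# probability to the data more slowly than any biased alternative.
```

The fair-coin explanation wins not by predicting that exact sequence, but by losing less probability on it than everything else.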
You call the supernatural explanations of the tentacle disguised ignorance, but it’s actually accurate ignorance. Anyone who rejects the supernatural after waking up with a blue tentacle will keep finding themselves shocked as things they deem impossible continue to occur. Yes, those who accept the supernatural would be shocked too, but many orders of magnitude less so. (Take “the supernatural” here to mean the hypothesis that there is either some kind of intelligence controlling things or you are hallucinating.)