A system for generating ungrounded but mostly true beliefs would be an oracle, as impossible as a perpetual motion machine.
(McKay & Dennett 2009)
Ah, I think you misunderstood me (on reflection, I wasn't very clear) — I'm doing an experiment, not a research project in the sense of looking over the existing literature.
(For the record, I decided on conducting something along the lines of the studies mentioned in this post to look at how distraction influences retention of false information.)
If you want to live in a nicer world, you need good, unbiased science to tell you about the actual wellsprings of human behavior. You do not need a viewpoint that sounds comforting but is wrong, because that could lead you to create ineffective interventions. The question is not what sounds good to us but what actually causes humans to do the things they do.
Douglas Kenrick
The idea that destroying the environment will make the remaining species "better" by making sure that only the "fittest" survive betrays a near-total misunderstanding of evolution. Evolution is just the name we give to the fact that organisms (or, more precisely, genes) which survive and reproduce effectively in a given set of conditions become more frequent over time. If you clear-cut the forest, you're not eliminating "weak" species and making room for the "strong" — you're getting rid of species that were well-adapted to the forest and increasing the numbers of whatever organisms can survive in the resulting waste.
I think that if you understand how evolution works on a really intuitive level — how blind it is — it's very difficult to believe both in human evolution and a guiding divinity. "Genes which promote their own replication become more common over time" is not a principle which admits of purpose. Vaguer understandings of evolution's actual mechanism probably contribute to the apparent reasonableness of "theistic evolution".
This one really annoys me. It's one of the very few posts of Eliezer's that I've ever downvoted, because it strikes me as both naive and foolish. And I think that's because what Eliezer's proposing here is to pretend that your map is the territory. To take your third-hand model of history (no doubt deeply flawed and horrendously incomplete) and treat it as if it were your actual experience. Not to mention that you just don't have the knowledge he suggests envisioning (how do you know what it actually feels like to change your mind about slavery?) — or the ...
Also, it occurs to me that this is essentially an application of Bayes' Theorem. In an ordinary survey, the posterior probability P(killed leopard | says yes) is 1, which is bad for the farmers, so they lie and thereby decrease the conditional probability P(says yes | killed leopard), which is bad for the surveyors. Adding the die roll increases the unconditional probability of saying yes, so that the posterior probability no longer equals the conditional, and they can both get what they want.
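Concretely, with invented numbers (and assuming a "forced response" rule where rolling a 6 means you say yes regardless of the truth; I don't know the actual survey's rule), a quick Python sketch:

```python
# Toy numbers, not from the actual survey: suppose 20% of farmers
# have killed a leopard, and assume a "forced response" rule where
# a private die roll of 6 means "say yes" regardless of the truth.
p_killed = 0.2
p_yes_given_killed = 1.0        # killers say yes (honestly or forced)
p_yes_given_innocent = 1 / 6    # innocents say yes only when forced

# The die raises the unconditional probability of a "yes":
p_yes = p_killed * p_yes_given_killed + (1 - p_killed) * p_yes_given_innocent

# Bayes: P(killed | yes) = P(yes | killed) * P(killed) / P(yes)
posterior = p_yes_given_killed * p_killed / p_yes
print(posterior)                # 0.6 rather than 1: a "yes" no longer convicts

# The surveyors still recover the true rate, since P(yes) = (5p + 1) / 6:
print((6 * p_yes - 1) / 5)      # 0.2
```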
The keywords here are "randomized response". There are some interesting variations (from the Wikipedia page):
The sensitive question is worded in two dichotomous alternatives, and chance decides, unknown to the interviewer, which one is to be answered honestly.
Alternative 1: "I have consumed marijuana." Alternative 2: "I have never consumed marijuana." The interviewed are asked to secretly throw a die and answer the first question only if they throw a 6, otherwise the second question.
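For illustration, here's a small simulation of that variant (a sketch under the stated one-in-six die rule; the function name and parameters are my own):

```python
import random

def estimate_prevalence(true_rate, n, seed=0):
    """Simulate the die-roll variant: on a 6 the respondent honestly
    answers "I have consumed marijuana"; otherwise they honestly
    answer "I have never consumed marijuana". The interviewer sees
    only True/False, never which statement was evaluated."""
    rng = random.Random(seed)
    trues = 0
    for _ in range(n):
        consumed = rng.random() < true_rate       # hidden ground truth
        asked_first = rng.randint(1, 6) == 6      # die picks the statement
        answer = consumed if asked_first else not consumed
        trues += answer
    p_true = trues / n
    # P(answer True) = (1/6)p + (5/6)(1 - p)  =>  p = (5 - 6*P(True)) / 4
    return (5 - 6 * p_true) / 4

print(estimate_prevalence(true_rate=0.3, n=100_000))  # close to 0.3
```

The die gives every individual respondent plausible deniability, yet the aggregate answer rate still pins down the true prevalence.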
I was very surprised to see that too, to the point of questioning whether the result was real, but apparently it is. (The particular result is on page 10 — and possibly elsewhere, I haven't read it through yet.)
I found the Coming of Age series to be both self-indulgent and quite dull, and I think that it's very difficult to use yourself as an example of vice or virtue without running into one or both of those issues. I also find that I (more-or-less automatically) downgrade an author's ethos by a lot when he's talking about himself as an illustrative example. But for this one, it's the skeeviness factor that dominates — it's just plain creepy to hear about your love life as a source of telling anecdotes. And that's distracting.
Polyamory may be great, but the righ...
Downvoted.
It's interesting and potentially useful, and I liked some of the links; however, I felt seriously skeeved-out throughout, probably due to the combination of uncomfortably personal authorial bildungsroman (with connotations of "if you do this right, you can be just like me"), and the implied promotion of polyamory. Would work much better if you could remove the autobiographical aspects.
I felt skeeved as well. I didn't mind the polyamory plugs, and in general I like autobiographical bits, as they bring more of a human element into posts.
What bothered me was that the discussion about romance felt very cold, somehow. Talking about "suboptimal" relationships, saying that you "scored" your first one-night stand, and such. It sounded like you weren't interested in other people as, well, people.
The interesting thing is that I don't really endorse these emotional reactions to your writing. In general, I'm completely fine with...
This one is really important, for a reason that wasn't spelled out in the original article — hindsight bias makes people think folk wisdom is more valid than it really is, and thereby opens the door to all kinds of superstitious belief. If people interpret scientific evidence as confirming 'common-sense' or handed-down knowledge (because they select the 'common-sense' belief that turned out to be true after seeing the data, rather than having to select one from the morass beforehand), then they're likely to increase their credence in other knowledge of tha...
The Atlantic put up a piece today using HP:MoR as the take-off point for discussing fanfiction and fan communities.
To be precise, knowing that someone is biased towards holding a belief decreases the amount you should update your own beliefs in response to theirs — because it decreases the likelihood ratio of the test.
(That is, having a bias towards a belief means people are more likely to believe it when it isn't true (more false positives), so a bias-influenced belief is less likely to be true and therefore weaker evidence. In Bayesian terms, bias increases P(B) without increasing P(B|A), so it decreases P(A|B).)
So CarmendeMacedo's right that you can't get evidence a...
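To put toy numbers on that parenthetical, a minimal Python sketch (the probabilities are invented for illustration):

```python
# Invented numbers: A = "the belief is true",
# B = "this person holds the belief".
p_a = 0.5            # prior that the belief is true
p_b_given_a = 0.8    # chance of holding the belief when it is true

def p_a_given_b(p_b_given_not_a):
    # Bayes: P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|~A)P(~A)]
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

print(p_a_given_b(0.1))  # unbiased believer: posterior ~0.89
print(p_a_given_b(0.4))  # biased believer: P(B) rises, posterior falls to ~0.67
```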
Sometimes, apparently rational self-interested strategies turn out (as in the prisoners' dilemma) to be self-defeating. This may look like a defeat for rationality, but it is not. Rationality is saved by its own open-endedness. If a strategy of following accepted rules of rationality is sometimes self-defeating, this is not the end. We revise the rules to take account of this, so producing a higher-order rationality strategy. This in turn may fail, but again we go up a level. At whatever level we fail, there is always the process of standing back and going up a further level.
Quoted in The Blank Slate by Steven Pinker
It was a good answer that was made by one who when they showed him hanging in a temple a picture of those who had paid their vows as having escaped shipwreck, and would have him say whether he did not now acknowledge the power of the gods,—‘Aye,' asked he again, ‘but where are they painted that were drowned after their vows?' And such is the way of all superstition, whether in astrology, dreams, omens, divine judgments, or the like; wherein men, having a delight in such vanities, mark the events where they are fulfilled, but where they fail, though this happens much oftener, neglect and pass them by.
Francis Bacon
As a somewhat casual reader and participant, my immediate reaction (regardless of functionality, which I really haven't tried out yet) is that the new design is horrendously ugly compared to the old one. I was intending to go through the Sequences soon, but the visual change is a pretty strong disincentive.
If at all possible, I'd like the ability to view posts using the old interface.
Since being introduced to Less Wrong and coming to understand 'truth' as a property of beliefs, namely how accurately they let you predict the world, I've separated 'validity' from 'truth'.
The syllogism "All cups are green; Socrates is a cup; therefore Socrates is green" is valid within the standard system of logic, but it doesn't correspond to anything meaningful. But the reason that we view logic as more than a curiosity is that we can use logic and true premises to reach true conclusions. Logic is useful because it produces true beliefs. ...
I find that Less Wrong is a conflation of about six topics:
These don't all seem to fit together entirely comfortably. Ideally, I'd split these into thre...
Do you do it?
No.
You would be harming another human being without expecting any benefit from doing so. Punishment is only justified when it prevents more harm than it causes, and this is specified not to be the case.
Our sense that people 'deserve' to be punished is often adaptive, in that it prevents further wrongdoing, but in this case it is purely negative.
Minor error: judging from context, I think you mean the Milgram Experiment, which focuses on obedience to authority, and not the Stanford Prison Experiment, which is about how social roles affect personalities.
I'm currently working on a summary of some of the central Less Wrong ideas, with links to the original Sequences posts. First paragraph of the (very) rough draft, currently sans links:
...The purpose of beliefs is to correspond with the state of the world and therefore allow you to predict reality. The 'truth' of a belief is therefore how accurately it predicts the world, which means that there can be degrees of truth, not simply a right and wrong answer. The way to arrive at beliefs which predict the world (at true beliefs) is to base your beliefs on eviden
That's pretty much Deism, I think. Not right, but not quite as wrong as some other possible approaches.
Welcome! I don't know how much or how systematically you've read, but if you're wondering about what makes something "true", you'll want to check out The Simple Truth (short answer: if it corresponds to reality), followed by Making Beliefs Pay Rent and What is Evidence.
But it sounds like you've made a very good start.
Very much, thank you. Your feedback has been a great help.
Given that others arrived at some of these conclusions before me, I can see why there would be disapproval -- though I can hardly feel disappointed to have independently discovered the same answers. I think I'll research the various models more thoroughly, refine my wording (I agree with you that using the term 'deontology' was a mistake), and eventually make a more complete and more sophisticated second attempt at morality as a decision theory problem.
Your article is an excellent one, and makes many of the same points I tried to make here.
Specifically,
...in Dilemma B, an ideal agent will recognize that their decision to pick their favorite ice cream at the expense of another person suggests that others in the same position will do (and have done) likewise, for the same reason.
is the same idea I was trying to express with the 'cheating student' example, and then generalized in the final part of the post, and likewise the idea of Parfitian-filtered decision theory seems to be essentially the same as ...
Your objection and its evident support by the community is noted, and therefore I have deleted the post. I will read further on the decision theory and its implications, as that seems to be a likely cause of error.
However, I have read the meta-ethics sequence, and some of Eliezer's other posts on morality, and found them unsatisfactory -- they seemed to me to presume that morality is something you should have regardless of the reason for it rather than seriously questioning the reasons for possessing it.
On the point of complexity of value, I was attempting...
Okay. I'm not going to post my writeup since it's a little outdated — close to two years old now — and contains a lot of info irrelevant to this discussion, but the gist:
I tried out piracetam very actively (using it frequently, varying a lot of things, and closely noting the effects) for about two months in summer 2012, and have been using it periodically since then. I didn't notice any long-term effects, though I don't think I've actually ever tried to test the effects of a fixed long-term regimen.
What I did find was very dramatic acute effects, starting...