Here is our monthly place to discuss Less Wrong topics that have not appeared in recent posts.
I just want to say thanks for all of the minor UI improvements over the past few weeks. Whoever helped is most appreciated.
So much of this emerging art seems to be about how to get ourselves to actually update our beliefs and actions in response to evidence, rather than just going around in circles doing what we're used to doing. In the quantum physics sequence, Eliezer told a story about aliens called the Ebborians, who "think faster than humans {and} are less susceptible to habit; they recompute what we would cache." Following the discussion of individual differences on the path towards rationality, I wonder: would an Ebborian art of rationality be about how to update less,...
I notice that there doesn't seem to be any way to have a Less Wrong profile page, or at least a link that people can click to learn more about you. As it is, no matter what you post on Less Wrong, it's going to get buried eventually.
Excellent diagrams-next-to-equations explanation of Bayes’ Theorem (except that I think diagrams with rectangles would be more visually accurate).
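For reference, the identity the linked diagrams are illustrating; a rectangle diagram makes each term proportional to an area of the unit square, which is why it tends to read as more visually accurate:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)}$$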
Suppose that I live on a holodeck but don't know it, such that anything I look at closely follows reductionist laws, but things farther away only follow high-level approximations, with some sort of intelligence checking the approximations to make sure I never notice an inconsistency. Call this the holodeck hypothesis. Suppose I assign this hypothesis probability 10^-4.
Now suppose I buy one lottery ticket, for the first time in my life, costing $1 with a potential payoff of $10^7 with probability 10^-8. If the holodeck hypothesis is false, then the expected...
Not my only objection, but:
so the probability that it will win is more like 10^-3
If the hololords smiled upon you, why would they even need you to buy a lottery ticket? How improbable is it that they not only want to help you, but they want to help you in this very specific way and in no other obvious way?
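To make the arithmetic concrete, here is a minimal sketch of the expected-value comparison, using only the numbers stated in the comment above. The holodeck-conditional win probability is a free parameter I'm introducing for illustration; the 10^-3 figure mentioned in the objection is one possible value for it, though neither comment pins this down:

```python
# Minimal sketch of the expected-value comparison for a $1 lottery ticket.
# p_win_given_holodeck is a hypothetical parameter: how likely a win is
# *if* the holodeck hypothesis is true is not something the comment states.

def ticket_ev(p_holodeck=1e-4, payoff=1e7, p_win_normal=1e-8,
              p_win_given_holodeck=1e-3, cost=1.0):
    p_win = (1 - p_holodeck) * p_win_normal + p_holodeck * p_win_given_holodeck
    return p_win * payoff - cost

print(ticket_ev(p_win_given_holodeck=1e-8))  # ignore the holodeck: about -$0.90
print(ticket_ev(p_win_given_holodeck=1e-3))  # holodeck raises win odds: about +$0.10
```

Under the plain 10^-8 win probability the ticket is worth about -$0.90 in expectation; a 10^-4 chance of a holodeck that wins you the lottery with probability 10^-3 is already enough to flip the sign.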
I know this is a blog - and not a journal - but if post authors could start with a summary of their main point, it might help. Like an abstract: say what you are going to say, then say it, then summarise.
I tend to give up on posts if the first paragraph doesn't show me that the topic is of interest to me.
I thought about replying to this anecdote about sleep with an anecdote of my own, but decided that it'll only add noise to the discussion. At the same time, I caught a heuristic that I would possibly follow had I not caught it, and that I now recall I did follow on many occasions: if someone else already replied, I'd join in.
It's a variety of conformity bias, and it sounds dangerous: it turns out that sometimes, only two people agreeing with each other is enough to change my decision, independently of whether their agreeing with each other gives any evidence t...
The dearth of posts in the past couple of days has led me to wonder whether there is a correlation between the length of time a post spends as the most recent post (or most recent promoted post), and the magnitude of its score. Or rather, I expect there to be such a correlation, but I wonder how strong it is.
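If anyone wanted to check, here is a quick sketch of how the strength of that correlation could be measured, assuming one scraped (time on top, final score) pairs from the site; the numbers below are made up purely for illustration:

```python
from scipy.stats import spearmanr

# Hypothetical scraped data: for each post, the hours it spent as the most
# recent (or most recent promoted) post, and its final score.
hours_on_top = [3, 12, 48, 6, 30, 18]
final_score = [5, 14, 35, 2, 22, 9]

rho, p_value = spearmanr(hours_on_top, final_score)
print(rho, p_value)  # rank correlation and its significance
```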
I just lost about 50 karma in the space of about an hour, with pretty much all of my most recent comments being voted down. I recall others mentioning they've had similar experiences, and was wondering how widespread this sort of thing actually is. Does this happen to others often? I can imagine circumstances under which it could legitimately occur, but it seems a bit odd, to say the least.
Do people have any thoughts on how to fund research rationally?
Charities? Futarchical government?
So far we have the following buttons below a comment:
What would you think of another button which would take you to a page that listed all the comments that had a permalink to that comment?
There's a lot of permalinking to past comments, but hardly any linking to future comments. I think this is just something that hasn't evolved yet -- once it does, we'll find it very natural.
Would it be difficult to implement? It would involve keeping a library of all permalinks used in comments, and updating the...
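On the implementation question: here is a minimal sketch of such a backlink index, assuming comments are available as a mapping of comment ID to body text. The function names and the permalink pattern are my own guesses for illustration, not anything from the actual codebase:

```python
import re
from collections import defaultdict

# Assumed permalink shape: .../lw/<post_id>/<slug>/<comment_id>
# Adjust the pattern to whatever the real URLs look like.
PERMALINK_RE = re.compile(r"lesswrong\.com/lw/\w+/[\w-]+/(\w+)")

def build_backlink_index(comments):
    """comments: dict of comment_id -> body text.
    Returns comment_id -> set of comment_ids whose bodies link to it."""
    backlinks = defaultdict(set)
    for source_id, body in comments.items():
        for target_id in PERMALINK_RE.findall(body):
            backlinks[target_id].add(source_id)
    return backlinks

def register_comment(backlinks, source_id, body):
    """Incremental update when a new comment is posted."""
    for target_id in PERMALINK_RE.findall(body):
        backlinks[target_id].add(source_id)
```

The proposed button would then just render backlinks.get(comment_id, set()) for the comment in question.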
To make a good decision, it's not necessary to be a good thinker. If you're wise enough to defer to someone who is a good thinker, that also works. And if you're wise enough to defer to someone who is wise enough to defer to someone (repeat N times) who is a good thinker, that also works. That suggests to me the hopeful thought that in a population of agents with varying rationality, a small change can cause a phase transition where the system goes from a very incompetent agent making the decisions to a very competent agent making the decisions. One might ...
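As a toy illustration of that hopeful thought (entirely my own construction, not anything from the comment): give each agent a competence level and a noisy estimate of its acquaintances' competence, let everyone defer to whoever looks most competent, and watch whose judgment ends up driving decisions as the noise in those estimates shrinks:

```python
import random

def simulate(n_agents=1000, judgment_noise=0.3, seed=0):
    rng = random.Random(seed)
    competence = [rng.random() for _ in range(n_agents)]

    # Each agent samples a few acquaintances and defers to whichever of
    # them (or itself) looks most competent under noisy judgment.
    def looks_like(j):
        return competence[j] + rng.gauss(0, judgment_noise)

    defer_to = []
    for i in range(n_agents):
        candidates = rng.sample(range(n_agents), 5) + [i]
        defer_to.append(max(candidates, key=looks_like))

    # Follow each agent's deference chain (with a cycle guard) to find
    # who actually ends up making its decision.
    total = 0.0
    for i in range(n_agents):
        seen, j = set(), i
        while defer_to[j] != j and j not in seen:
            seen.add(j)
            j = defer_to[j]
        total += competence[j]
    return total / n_agents  # mean competence of the actual decision-makers

for noise in (1.0, 0.5, 0.2, 0.05):
    print(noise, round(simulate(judgment_noise=noise), 3))
```

Whether there is a sharp transition rather than a gradual improvement presumably depends on the network structure, which this toy model doesn't try to capture.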
Meta: Given the rate at which new comments appear, I wish the comments feed http://lesswrong.com/comments/.rss contained more than 20 entries; say closer to 200. Also, all of the feeds I've looked at (front page, all new, comments) have the identical title "lesswrong: What's new", which is useless for distinguishing them.
I was poking around the older posts and found So you say you're an altruist.... Without intending to open the whole discussion up again, I want to talk about the first example.
...So please now imagine yourself to be in an ancient country which is ruled over by an evil king who has absolute power of life or death over all his subjects—including yourself. Now this king is very bored, and so for his amusement he picks 10 of his subjects, men, women, and children, at random as well as an eleventh man who is separate from the rest. Now the king gives the eleven
Re. repeated requests for some LW whipping-boy other than religion: How about (Platonic) realism?
It may be more popular than religion, and it may be hard to find a religion that doesn't require at least moral realism; but people will get less worked up over attacks on their metaphysics than over attacks on their religion.
You may wish to exempt morals and the integers.
Having recently received a couple of Amazon gift certificates, I'm looking for recommendations of 'rationalist' books to buy. (It's a little difficult to separate the wheat from the chaff.)
I'm looking mainly for non-fiction that would be helpful on the road to rationality. Anything from general introductory-type texts to more technical or math-oriented stuff. I found this OB thread, which has some recommendations, but I thought that:
What do people here (esp. libertarians) think about (inter)net neutrality?
Seems to me that net neutrality is a great boon to spammers and porn downloaders. People might not like it so much if they discovered that, without net neutrality, they could pay an extra dollar a month and increase their download speed while browsing by a factor of ten.
Here is our monthly place to discuss Less Wrong topics that have not appeared in recent posts.
Topics that have appeared in recent posts are strictly forbidden upon pain of 3^^^3 dust specks in your eye.
In the process of reading and thinking and processing all of this new information, I keep having questions about random topics. Generally speaking, I start analyzing the questions and either find a suitable answer or a more relevant keystone question. I have often thought about turning these into posts and submitting them for the populace, but I really have no idea if the subjects have been broached or solved or whatever. What should I do?
An example would be continuing the train of thought started in Rationalistic Losing. The next question in that line ...
Epistemic problems where motor actions are inhibited, and all the agent can do is indicate the correct answer, seem like an important subset of the problems a rational agent can face, if only because the skill of solving them is less complicated to learn and is relatively easy to test.
I believe this skill is usually referred to as "reasoning". Maybe we should be discussing this subject more than we do.
This is not very topical, but does anyone want to help me come up with a new term paper topic for my course on self-knowledge? My original one got shot down when it turned out that what I was really trying to defend was a hidden assumption that is unrelated to self-knowledge. Any interesting view I can defend or attack on the subject of introspection, reflection, self-awareness, etc. etc. has potential. Recommended reading is appreciated.
Seeking comments on Info-Gap Decision Theory:
http://www.stat.columbia.edu/~cook/movabletype/archives/2009/04/what_is_info-ga.html
Oops, I "reported" when I meant to "reply". (Someone was talking to me and I clicked 'yes'.) What action need I take to undo?
It seems like the sensible thing would be to report this as well, so the two cancel out. However, I can't report myself... Can you report this, indicating you have done so with a single downvote? Then I will delete this comment.
Thanks
JGWeissman writes, "I don't see what you gain by this strategy that justifies the decrease in correlation between a comments displayed karma score and the value the community assigns it that occurs when you down vote a comment not because it is a problem, but because the author had written other comments that are a problem."
Vladimir Nesov writes, "If you are downvoting indiscriminately, not separating the better comments from the worse ones, without even bothering to understand them, you are abusing the system."
Anna writes, "This has the following advantages over blanket user-downvoting: . . . It does not impair quality-indicators on the user's other comments"
The objection is valid. I retract my proposal and will say so in an addendum to my original comment.
The problem with my proposal is the part where the voter goes to a commenter's lesswrong.com/user/ page and votes down 20 or 30 or so comments in a row. That dilutes or cancels out useful information, namely, votes from those who used the system the way it was intended.
If there were a way for a voter to reduce the karma of a person without reducing the point-score of any substantive comment, then my proposal might still have value, but without that, my proposal will have a destructive effect on the community, so of course I withdraw my proposal.