Yesterday, someone moved one of my posts from Main to Discussion without telling me. Again.
I encourage the site administrators to show some basic courtesy to the posters who provide the content for the site. I believe this would be a better way of doing things:
1. Have a policy on what has to happen to move a post from Main to Discussion. Who can do it? How many admins are there who can do this? State this policy in a FAQ.
2. When you move a post from Main to Discussion, make a comment on the post saying you have done so and why you have done so.
If you've been following the announced partnership between LessWrong and Castify, you'll know that we would like to start offering the promoted posts as a podcast.
So far, everything offered by Castify is authored by Eliezer Yudkowsky who gave permission to have his content used. Because promoted posts can be written by those who haven't explicitly given us permission, we're reluctant to offer them without first working through the licensing issues with the community.
What we propose is that all content on the site be subject to a Creative Commons license that would allow content posted to LessWrong to be used commercially, as long as the work is given proper attribution.
LessWrong management and Castify want feedback from the community before moving forward. Thoughts?
Edit: EricHerboso was kind enough to start a poll in the comments here.
I was excited to find this site, so I wanted to know how many people had joined LessWrong. Was it what it seemed - that a lot of people had actually gathered around the theme of rational thought - or was that just wishful thinking about a site that a guy with a neat idea and his buddies put together? I couldn't find anything stating the number of members on LessWrong anywhere on the site or the internet, so I decided it would be a fun test of my search engine knowledge to nail jello to a tree and make my own.
Some argue that Google totals are completely meaningless. The real problem, however, is that the process is complicated: if you don't know how search engines work, your likelihood of getting a usable number is low. I took the potential pitfalls into account when MacGyvering this figure out of Google. So far, no one has posted a significant flaw with my specific method. (I will change that statement if they do, once I've read their comment.) Also, I was right (Find in page: total).
Here is the query I constructed:
site:lesswrong.com/user -"submitted by" -"comments by"
(Translation provided at the end.)
This gets a similar result in Bing and Yahoo.
If this is correct, LessWrong has over 9,000 members. That's my claim: "LessWrong probably has over 9,000 members" not "LessWrong has exactly 9,000 members". My LessWrong population figure is likely to be low. (I explain this below.)
Why did I do this? I was really overjoyed to find this site and wanted to see whether it was somebody's personal site with just a few buddies, or if they actually managed to draw a significant gathering of people who are interested in rational thought. I was very happy to see that it looks much bigger than a personal site. Since it was so hard to find out how many users LessWrong has, I decided to share.
I think a lot of people make the hasty generalization that "all search engine totals are meaningless". If you're an average user just plugging in search terms with little understanding of how search engines work: yes, you should regard them as meaningless. However, if you know the limitations of a technique, and which parts of the system you're working within are consistent and which are not, I say it is possible to get some meaning within those limitations. Do I know all the limitations? Well, I assume I am unaware of things I don't know, so I won't say that. But I do know that so far nobody has proven this number or method wrong. If you want to prove me wrong, go for it. That would be fascinating. Remember that the claim is "LessWrong probably has over 9,000 members". The entire purpose of this was to get an "at least this many" figure for how many members LessWrong has. The inaccuracies I've already taken into consideration in order to compensate for the limits of this technique are listed below:
Why this is an "at least this many" figure, pitfalls I've avoided or addressed, and inaccuracies.
- Some users may not be included in Google's index yet. For instance, if they have never posted, there may be no link to their page (which is what I searched for - user pages), and the spider would not find them. The count may effectively be restricted to members who have actually commented, posted, or been linked to in some way somewhere on the internet.
- Search engine caches are not in real time. There can be a lag of up to months, depending on how much the search engine "likes" the page.
- Former employees of a major search engine have reported that it uses crazy old computer equipment to store its caches. I've been told that it is common for sections of the cache to be down for that reason.
- Search engines have restrictions in place to conserve resources. For instance, they won't let you peruse all of the results using the "next" button, and they don't total all of the results that they have when you first press "search" (you may see that number increase later if you continue to press "next" to see more pages of results.)
- It has been argued that Google doesn't interpret search terms the way you'd think. I knew that before I started. The query was designed with that in mind. I explain that here: http://lesswrong.com/r/discussion/lw/e4j/number_of_members_on_lesswrong/780g
- Some of the results in Bing and Yahoo were irrelevant, though I think I weeded them pretty thoroughly for Google if my random samples of results pages are a good indication of the whole.
- When you go to your user page, if you have more than 10 comments, a next link shows at the bottom and clicking it makes more pages appear. My understanding is that Google doesn't index these types of links - and they don't seem to be getting included. http://lesswrong.com/lw/e4j/number_of_members_on_lesswrong/7839
Go ahead and check it out - stick the query in Google and see how many LessWrong members it shows. You'll certainly get a more up-to-date total than I have posted here. ;)
Translation for those of you who don't know Google's codes:

site:lesswrong.com/user

"Search only lesswrong.com, and only its user directory."

(The user directory is where each user's home page is, so I'm essentially telling it "find all the home page directories".)
-"submitted by" -"comments by"
Exclude any page in that directory containing the exact text "submitted by" or "comments by".

(The submissions and comments pages use URLs in that directory, so they would show up in the results if I did not subtract them. Also, I used exact text specific to those pages, so that the text of the links on user home pages doesn't get those home pages omitted from the search.)
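For anyone who would rather automate the check than eyeball the results page, here is a sketch of how one might fetch Google's estimated total programmatically via the Custom Search JSON API. This assumes you have registered your own API key and search engine ID (the placeholders below are hypothetical and must be replaced), and the number returned is an estimate subject to all the caveats discussed in this post.

```python
# Sketch: fetch Google's estimated result count for the member query.
# YOUR_API_KEY and YOUR_ENGINE_ID are hypothetical placeholders; you
# would need your own Custom Search API credentials for this to run.
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"
ENGINE_ID = "YOUR_ENGINE_ID"

query = 'site:lesswrong.com/user -"submitted by" -"comments by"'
params = urllib.parse.urlencode(
    {"key": API_KEY, "cx": ENGINE_ID, "q": query})
url = "https://www.googleapis.com/customsearch/v1?" + params

with urllib.request.urlopen(url) as response:
    data = json.load(response)

# totalResults is Google's estimate, not an exact count.
print("Estimated member pages:", data["searchInformation"]["totalResults"])
```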
I realize this number isn't scientific proof of anything (we can't see Google's code, so that would be foolish), which is why I'm not attempting to use it to convince anyone of anything important.
So I have been checking laws around the world regarding apostasy, and I have found extremely troubling data on the approach Muslim legal traditions take to dealing with apostates. In most cases, publicly stating that you do not, in fact, love Big Brother (specifically, that you do not believe in God, the Prophet, or Islam), after having made the Profession of Faith as a sane adult (otherwise, you were never a Muslim in the first place), will get you killed.
Yes, killed. It's one of only three things traditional Islamic tribunals hand out death penalties for, the others being murder and adultery.
However, interestingly enough, you are often given three days of detainment to "think it over" and "accept the faith".
Some other countries, though, are more forgiving: you are allowed to be a public apostate. But you are still not allowed to proselytize: that remains a crime (in Morocco, it's 15 years of prison and a flogging). And proselytism is a crime even if you are not a Muslim. I leave to your imagination how precarious the situation of religious minorities is in this context.
How little sense all of this makes, from a theological perspective. Forcing someone to "accept the faith" at knife point? Forbidding you from arguing against the Lord's (reputedly) absolutely self-evident and miraculously beautiful Word?
No. These are the patterns of sedition and treason laws. The crime of the apostate is not one against the Lord (He can take care of Himself, and He certainly can take care of the apostate) but against the State (or against a human lord, where the political regime has one).
And the LessWronger asks himself: "How is that my concern? Please, get to the point." The point is that the promotion of rationalism faces a terrible obstacle there. We're not talking "God Hates You" placards, or getting fired from your job. We're talking firing squad and electric chair.
"Sure," you say, "but rationalism is not about atheism." And you'd be right. It isn't. It's just a very likely conclusion for the rationalist mind to reach, and, also, our cult leader (:P) is a raging, bitter, passionate atheist. That is enough. If word spreads and authorities find out, just peddling HPMOR might get people jailed. And that's not accounting for the hypothetical (cough) case of a young adult reading the Sequences and getting all hotheaded about it and doing something stupid. Like trying to promote our brand of rationality in such hostile terrain.
So, let's take this hypothetical (harrumph) youth. They see irrationality around them, obvious and immense, they see the waste and the pain it causes. They'd like to do something about it. How would you advise them to go about it? Would you advise them to, in fact, do nothing at all?
More importantly, concerning Less Wrong itself, should we try to distance ourselves from atheism and anti-religiousness as such? Is this baggage too inconvenient, or is it too much a part of what we stand for?
I attended a talk yesterday given under the auspices of the Ottawa Skeptics on the subject of "metacognition" or thinking about thinking -- basically, it was about core rationality concepts. It was designed to appeal to a broad group of lay people interested in science and consisted of a number of examples drawn from pop-sci books such as Thinking, Fast and Slow and Predictably Irrational. (Also mentioned: straw vulcans as described by CFAR's own Julia Galef.) If people who aren't familiar with LW ask you what LW is about, I'd strongly recommend pointing them to this video.
Here's the link.
Guys I'd like your opinion on something.
Do you think LessWrong is too intellectually insular? What I mean by this is that we very seldom seem to adopt useful vocabulary, arguments, or information from outside of LessWrong. All I can think of is some of Robin Hanson's and Paul Graham's stuff, and I don't think Robin Hanson really counts, since LessWrong grew out of Overcoming Bias.
The community seems not to update on ideas and concepts that didn't originate here. The only major examples fellow LWers brought up in conversation were works that Eliezer cited as great or influential. :/
Another thing (I could be wrong about this, naturally): it seems clear to me that LessWrong has not grown. I'm not talking numerically. I can't point to major progress made in the past two years. I have heard several other users express similar sentiments. To quote one user:
I notice that, in topics that Eliezer did not explicitly cover in the sequences (and some that he did), LW has made zero progress in general.
I've recently come to think this is probably true to a first approximation. I was checking out a blogroll and saw LessWrong listed as Eliezer's blog about rationality. I realized that essentially it is. And worse, this makes it a very crappy blog, since the author no longer posts new updates. Originally the man had high hopes for the site. He wanted to build something that could keep going on its own, growing without him. It turned out to be a community mostly dedicated to studying the scrolls he left behind. We don't even seem to do a good job of getting others to read the scrolls.
Overall there seems to be little enthusiasm for actually systematically reading the old material. I'm going to share my take on what I think is a symptom of this. I was debating which title to pick for my first ever original-content Main article (it was originally titled "On Conspiracy Theories") and made what at first felt like a joke but then took on a horrible ring of truth:
Over time the meaning of an article will tend to converge with the literal meaning of its title.
We like linking articles, and while people may read a link the first time, they don't tend to read it the second or third time they run across it. The phrase is eventually picked up and used out of its appropriate context. Something that was supposed to be shorthand for a nuanced argument starts to mean exactly what "it says". Well, not exactly: people still recall it as a vague applause light. Which is actually worse.
I cited "Politics is the Mind-Killer" as precisely such an example. In the original article, Eliezer basically argues that gratuitous politics, political thinking that isn't outweighed by its value to the art of rationality, is to be avoided. This soon came to mean that discussing politics is forbidden in Main and Discussion articles, though it lives on in the comment sections.
Now, whether LessWrong remains intellectually productive is a separate question from whether it is insular, but I feel both need to be discussed. If our community weren't growing but weren't insular either, it could at least remain relevant.
This site has a wonderful ethos for discussion and thought. Why do we seem to be wasting it?
gives a page which lists all the recent posts in both the Main and Discussion sections. I've posted it in the comments section before, but I decided to put it in a discussion post because it's a really handy way of accessing the site. I found it by guessing the URL.
Following http://lesswrong.com/lw/bwo/logical_fallacy_poster/ some people complained about
- the sarcastic illustration
- the lack of references
- the weird categorization, which should instead fit a Bayesian framework
- the simplistic or even wrong definitions
- and more
Yet this poster has ONE key difference from the ideal poster: it exists.
If it sparks criticism that leads to a new, LessWrong-compatible poster, then it is well worth the criticism.
The obvious next step, then, is to make a poster that takes such well-founded suggestions into account and synthesizes the LessWrong lessons visually.
In your opinion, then, what would be a good structure (e.g. a hierarchy of fallacies) and a good design theme?
I would like to ask for help on how to use expected utility maximization, in practice, to maximally achieve my goals.
As a real world example I would like to use the post 'Epistle to the New York Less Wrongians' by Eliezer Yudkowsky and his visit to New York.
How did Eliezer Yudkowsky compute that it would maximize his expected utility to visit New York?
It seems that the first thing he would have to do is figure out what he really wants, his preferences[1], right? The next step would be to formalize his preferences by describing them as a utility function, assigning a certain number of utils[2] to each member of the set, e.g. his own survival. This description would have to be precise enough to figure out what it would mean to maximize his utility function.
Now before he can continue, he will first have to compute the expected utility of computing the expected utility of computing the expected utility of computing the expected utility[3] ... and also compare it with alternative heuristics[4].
He then has to figure out each and every possible action he might take, and study all of their logical implications, to learn about all possible world states he might achieve by those decisions, calculate the utility of each world state and the average utility of each action leading up to those various possible world states[5].
To do so he has to figure out the probability of each world state. This further requires him to come up with a prior probability for each case and study all available data: for example, how likely it is to die in a plane crash, how long it would take to be cryonically suspended from where he is in case of a fatality, the crime rate, and whether aliens might abduct him (he might discount the last example, but then he would first have to figure out the right level of small probabilities that are considered too unlikely to be relevant for judgment and decision making).
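To make the bookkeeping concrete: the quantity being maximized is the probability-weighted average of utilities over world states, EU(action) = sum over states of P(state | action) * U(state). Below is a minimal sketch of that calculation; every action, outcome, probability, and utility in it is invented purely for illustration.

```python
# Minimal expected-utility sketch. Every action, outcome,
# probability, and utility here is invented for illustration.

actions = {
    "fly to New York": [
        # (probability of world state, utility of world state)
        (0.9899, 100.0),    # trip goes well, meets the meetup group
        (0.0001, -10000.0), # plane crash
        (0.0100, -20.0),    # trip is a wash (delays, illness, ...)
    ],
    "stay home": [
        (1.0, 0.0),         # baseline: nothing gained, nothing risked
    ],
}

def expected_utility(outcomes):
    """Probability-weighted average of utilities over world states."""
    return sum(p * u for p, u in outcomes)

for action, outcomes in actions.items():
    print(f"{action}: EU = {expected_utility(outcomes):+.2f}")

best = max(actions, key=lambda a: expected_utility(actions[a]))
print("Action maximizing expected utility:", best)
```

Of course, the hard part this post is asking about (where the actions, probabilities, and utilities come from in the first place) is exactly what such a sketch takes as given.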
I have probably missed some technical details and gotten others wrong, but this shouldn't detract too much from my general request: could you please explain how Less Wrong style rationality is to be applied practically? I would also be happy if you could point out some worked examples or suggest relevant literature. Thank you.
I also want to note that I am not the only one who doesn't know how to actually apply what is being discussed on Less Wrong in practice. From the comments:
You can’t believe in the implied invisible and remain even remotely sane. [...] (it) doesn’t just break down in some esoteric scenarios, but is utterly unworkable in the most basic situation. You can’t calculate shit, to put it bluntly.
None of these ideas are even remotely usable. The best you can do is to rely on fundamentally different methods and pretend they are really “approximations”. It’s complete handwaving.
Using high-level, explicit, reflective cognition is mostly useless, beyond the skill level of a decent programmer, physicist, or heck, someone who reads Cracked.
I can't help but agree.
P.S. If you really want to know how I feel about Less Wrong then read the post 'Ontological Therapy' by user:muflax.
1. What are "preferences" and how do you figure out what long-term goals are stable enough under real world influence to allow you to make time-consistent decisions?
2. How is utility grounded and how can it be consistently assigned to reflect your true preferences without having to rely on your intuition, i.e. pull a number out of thin air? Also, will the definition of utility keep changing as we make more observations? And how do you account for that possibility?
3. Where and how do you draw the line?
4. How do you account for model uncertainty?
5. Any finite list of actions maximizes infinitely many different quantities. So, how does utility become well-defined?
Can anyone tell me why it is that if I use my rationality exclusively to improve my conception of rationality, I fall into an infinite recursion? EY says this in The Twelve Virtues and in Something to Protect, but I don't know what his argument is. He goes as far as to say that you must subordinate rationality to a higher value.
I understand that by committing yourself to your rationality you lose out on the chance to notice if your conception of rationality is wrong. But what if I use the reliability of win that a given conception of rationality offers me as the only guide to how correct that conception is? I can test reliability of win by taking a bunch of different problems whose answers are known, but not to me, solving them using my current conception of rationality and solving them using the alternative conception of rationality I want to test, then checking the answers I arrived at with each conception against the right answers. I could also take a bunch of unsolved problems and attack them from both conceptions of rationality, and see which one I get the most solutions with. If the set of problems I solve with one isn't a subset of the set I solve with the other, then I'll see if I can somehow take the union of the two conceptions. And, though I'm still not sure enough about this method to use it, I suppose I could also figure out the relative reliability of two conceptions by making general arguments about the structures of those conceptions; if one conception is "do that which the great teacher says" and the other is "do that which has maximal expected utility", I would probably not have to solve problems using both conceptions to see which one most reliably leads to win.
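The testing procedure above is essentially an algorithm, so here is a minimal sketch of its comparison step. The solvers and problems are hypothetical placeholders; only the set-comparison logic comes from the paragraph above.

```python
# Sketch of the comparison step described above: run two "conceptions
# of rationality" (modeled as plain solver functions) over problems
# with known answers and compare which problems each one solves.
# The solvers and problems are hypothetical placeholders.

def solved_set(solver, problems):
    """Names of the problems the solver answers correctly."""
    return {name for name, (instance, answer) in problems.items()
            if solver(instance) == answer}

def compare(solver_a, solver_b, problems):
    a = solved_set(solver_a, problems)
    b = solved_set(solver_b, problems)
    if a >= b:
        return "keep A"            # A solves everything B does
    if b > a:
        return "switch to B"       # B is strictly more reliable here
    return "try to unify A and B"  # each solves problems the other misses

# Toy usage: each placeholder solver gets exactly one problem right.
problems = {"p1": (3, 9), "p2": (5, 10)}
square = lambda x: x * x   # solves p1 only
double = lambda x: 2 * x   # solves p2 only
print(compare(square, double, problems))  # -> "try to unify A and B"
```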
And what if my goal is to become as epistemically rational as possible? Then I would just be looking for the conception of rationality that leads to truth most reliably, testing truth by predictive power.
And if being rational for its own sake just doesn't seem like it's valuable enough to motivate me to do all the hard work it requires, let's assume that I really really care about picking the best conception of rationality I know of, much more than I care about my own life.
It seems to me that if this is how I do rationality for its own sake — always looking for the conception of goal-oriented rationality which leads to win most reliably, and the conception of epistemic rationality which leads to truth most reliably — then I'll always switch to any conception I find that is less mistaken than mine, and stick with mine when presented with a conception that is more mistaken, provided I am careful enough about my testing. And if that means I practice rationality for its own sake, so what? I practice music for its own sake too. I don't think that's the only or best reason to pursue rationality, certainly some other good and common reasons are if you wanna figure something out or win. And when I do eventually find something I wanna win or figure out that no one else has (no shortage of those), if I can't, I'll know that my current conception isn't good enough. I'll be able to correct my conception by winning or figuring it out, and then thinking about what was missing from my view of rationality that wouldn't let me do that before. But that wouldn't mean that I care more about winning or figuring some special fact than I do about being as rational as possible; it would just mean that I consider my ability to solve problems a judge of my rationality.
I don't understand what I lose out on if I pursue the Art for its own sake in the way described above. If you do know of something I would lose out on, or if you know Yudkowsky's original argument showing the infinite recursion when you motivate yourself to be rational by your love of rationality, then please comment and help me out. Thanks ahead of time.
After just spending some time browsing free nonfiction Kindle ebooks on Amazon, it occurred to me that it might be a good idea for SIAI/LW to publish, for free download through Amazon, some introductory LW essays and other useful introductory works like Twelve Virtues of Rationality and The Simple Truth.
People who search for 'rationality' on Google will see Eliezer's Twelve Virtues of Rationality and LW. It would be nice if searching for rationality on Amazon also led people to similar resources that could be read on the Kindle with just one click. It would considerably expand the audience of potential readers (and LW contributors and SIAI donors).
I wrote a short userscript[1] that allows for jumping to the next (or previous) new comment in a page (those marked with green). I have tested it on Firefox Nightly with the Greasemonkey addon, and on Chromium. Unfortunately, I think that user scripts only work in Chromium/Google Chrome and Firefox (with Greasemonkey).
Download here. (Clicking the link should offer an install prompt, and that is all the work that needs to be done.)
It inserts a small box in the lower right-hand corner that indicates the number of new messages and has a "next" and a "previous" link.
Clicking either link should scroll the browser to the top of the appropriate comment (wrapping around at the top and bottom).
The "!" link shows a window for error logging. If a bug occurs, clicking the "Generate log" button inside this window will create a box with some information about the running of the script2, copying and pasting that information here will make debugging easier.
I have only tested on the two browsers listed above, and only on Linux, so feedback about any bugs/improvements would be useful.
(Technical note: it is released under the MIT License, and this link is to exactly the same file as above, renamed so that the source can be viewed more easily. The file extension needs to be changed back to "user.js" for it to run properly as a user script.)
v0.1 - First version
v0.2 - Logging & indication of number of new messages
v0.3 - Correctly update when hidden comments are loaded (and license change). NOTE: Upgrading to v0.3 on Chrome is likely to cause a "Downgrading extension error" (I'd made a mistake with the version numbers previously), the fix is to uninstall and then reinstall the new version. (uninstall via Tools > Extensions)
[2] Specifically: the URL, counts of different sets of comments, some info about the new comments, and also a list of the clicks on "prev" and "next".
Suppose that you're a bee. Perhaps, even, an extremely rational bee. And yet, as you go through your life, you can't shake the feeling that you're missing something - the other bees live so effortlessly, alighting on flowers bursting with pollen as if by chance. Try as you might, you can't seem to figure out the patterns that they're unconsciously drawn to. Are you overanalyzing? Are you overwhelmed by sensory data? But the others seem to defy thermodynamics in their ability to extract useful information, all the while wasting so much effort on suboptimal patterns of thought.
Perhaps they have access to different data? Perhaps, where you see a uniform field of yellow, they see bullseyes.
Less Wrong seems to have a problem with socializing: not just an unusual share of its people, but the community's character itself (as if it were a person). We should suspect ourselves (as a collective) of overlooking the ultraviolet, those facts about the world that are so easily accessed by some others. We should be suspicious of simplistic or monolithic explanations of social reality that don't allow for sweeping social success on the same scale as their claims. We should be suspicious of dismissals of social concerns.
Am I off the mark? Am I worried over nothing? Am I overreaching? I am tossing this idea out into the sandstorm of doubt so that it can be worn down and honed to the razor edge at its core, if such a thing exists. I ask you to be my wind and sand.
Disclaimers: I don't intend this as an insult. It's a reminder - as a collective intelligence, we have a blind spot. We shouldn't conclude that there's nothing behind it. I myself am pretty dang "manualistic" (or whatever the other side of neurotypical is called). I am not an apiarist.
Edit: I've removed the focus on Autism. I was wrong, and I apologize. The post may be further edited in the near future.
(Is Bayesianism even a word? Should it be? The suffix "ism" sets off warning lights for me.)
Visitors to LessWrong may come away with the impression that they need to be Bayesians to be rational, or to fit in here. But most people are a long way from the point where learning Bayesian thought patterns is the most time-effective thing they can do to improve their rationality. Most of the insights available on LessWrong don't require people to understand Bayes' Theorem (or timeless decision theory).
I'm not calling for any specific change, just asking that we keep this in mind when writing things in the Wiki or constructing a rationality workbook.
When the sequences were copied from Overcoming Bias to Less Wrong, it looks like something went very wrong with the character encoding. I found the following sequences of HTML entities in words in the sequences:
Ă˘Â€Â” arbitrator?i window?and
ĂŞ b?te m?me
ĂŠ fianc?e proteg?s d?formation d?colletage am?ricaine d?sir
ĂƒÂŻ na?ve na?vely
Ã¶ Schr?dinger L?b
ĂƒÂś Schr?dinger H?lldobler
Ăź D?sseldorf G?nther
â€“ ? Church? miracles?in Church?Turing
â€™ doesn?t he?s what?s let?s twin?s aren?t I?ll they?d ?s you?ve else?s EY?s Whate?er punish?d There?s Caledonian?s isn?t harm?s attack?d I?m that?s Google?s arguer?s Pascal?s don?t shouldn?t can?t form?d controll?d Schiller?s object?s They?re whatever?s everybody?s That?s Tetlock?s S?il it?s one?s didn?t Don?t Aslan?s we?ve We?ve Superman?s clamour?d America?s Everybody?s people?s you?d It?s state?s Harvey?s Let?s there?s Einstein?s won?t
ĂĄ Alm?si Zolt?n
ĂŤ pre?mpting re?valuate
Ă¨ l?se m?ne accurs?d
â†’ high?low low?high
Ä k?rik Siddh?rtha
รถ Sj?berg G?delian L?b Schr?dinger G?gel G?del co?rdinate W?hler K?nigsberg P?lzl
Â  I?understood ? I?was
â€” PEOPLE?and smarter?supporting to?at problem?and probability?then valid?to opportunity?of time?in true?I view?wishing Kyi?and ones?such crudely?model stupid?which that?larger aside?from Ironically?but intelligence?such flower?but medicine?as
â€ side?effect galactic?scale
Â´ can?t Biko?s aren?t you?de didn?t don?t it?s
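For what it's worth, these patterns are consistent with classic mojibake: UTF-8 byte sequences decoded with the wrong single-byte code page (the Ă/ĂƒÂ entries look like the same mistake applied twice over, apparently through a Central European code page, and the รถ entries through a Thai one). A minimal sketch reproducing the single-pass failure, assuming Windows-1252 as the wrong decoder:

```python
# Reproduce the corruption pattern: encode a character as UTF-8,
# then (incorrectly) decode those bytes with a single-byte codec.
for char in ["’", "ö", "é"]:
    mangled = char.encode("utf-8").decode("cp1252")
    print(f"{char} -> {mangled}")
# Output:
#   ’ -> â€™  (cf. the â€™ row above)
#   ö -> Ã¶   (cf. the Ã¶ row above)
#   é -> Ã©   (one decode pass; the ĂŠ row looks like two passes)
```

Running the round trip the other way (encode the mangled text as cp1252, decode as UTF-8) recovers the original characters where no bytes were dropped, which suggests the text is mechanically repairable wherever the "?" placeholders haven't already destroyed the information.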