LW Women: LW Online
Standard Intro
The following section will be at the top of all posts in the LW Women series.
Several months ago, I put out a call for anonymous submissions by the women on LW, with the idea that I would compile them into some kind of post. There is a LOT of material, so I am breaking them down into more manageable-sized themed posts.
Seven women submitted, totaling about 18 pages.
Standard Disclaimer- Women have many different viewpoints, and just because I am acting as an intermediary to allow for anonymous communication does NOT mean that I agree with everything that will be posted in this series. (It would be rather impossible to, since there are some posts arguing opposite sides!)
Warning- Submitters were told to not hold back for politeness. You are allowed to disagree, but these are candid comments; if you consider candidness impolite, I suggest you not read this post.
To the submitters- If you would like to respond anonymously to a comment (for example if there is a comment questioning something in your post, and you want to clarify), you can PM your message and I will post it for you. If this happens a lot, I might create a LW_Women sockpuppet account for the submitters to share.
Please do NOT break anonymity, because it lowers the anonymity of the rest of the submitters.
(Note from me: I've been procrastinating on posting these. Sorry to everyone who submitted! But I've got them organized decently enough to post now, and will be putting one up once a week or so, until we're through)
Submitter A
I think this is all true. Note that that commenter hasn't commented since 2009.
Objectifying remarks about attractive women and sneery remarks about unattractive women are not nice. I worry that guys at less wrong would ignore unattractive women if they came to meetings. Unattractive women can still be smart! I also worry that they would only pay attention to attractive women insofar as they think they might get to sleep with them.
I find the "women are aliens" attitude that various commenters (and even Eliezer in the post I link to) seem to have difficult to deal with: http://lesswrong.com/lw/rp/the_opposite_sex/. I wish these posters would make it clear that they are talking about women on average: presumably they don't think that all men and all women find each other to be like aliens.
I find I tend to shy away from saying feminist things in response to PUA/gender posts, since there seems to be a fair amount of knee-jerk down-voting of anything feminist sounding. There also seems to be quite a lot of knee-jerk up-voting of poorly researched armchair ev-psych.
Linked to 3, if people want to make claims about men and women having different innate abilities, that is fine. However, I wish they'd make it clear when they are talking on average, i.e. "women on average are worse at engineering than men" not "women are worse at engineering than men."
A bit of me wishes that the "no mindkiller topics" rule was enforced more strictly, and that we didn't discuss sex/gender issues. I do think it is off-putting to smart women - you don't convert people to rationality by talking about such emotive topics. Even if some of the claims like "women on average are less good at engineering than men" are true*, they are likely to put smart women off visiting Less Wrong. I'm not sure to what extent we should sacrifice looking for truth to attract people. I suspect many LWers would say not at all. I don't know. We already rarely discuss politics, so would it be terrible to also discuss sex/gender issues as little as possible?
I agree with Luke here
*and I do think some of them are true
***
Submitter B
My experience of LessWrong is that it feels unfriendly. It took me a long time to develop skin thick enough to tolerate an environment where warmth is scarce. I feel pretty certain that I've got a thicker skin than most women and that the environment is putting off other women. You wouldn't find those women writing an LW narrative, though - the type of women I'm speaking of would not have joined. It's good to open a line of communication between the genders, but by asking the women who stayed, you're not finding out much about the women who did not stay. This is why I mention my thinner-skinned self.
What do I mean by unfriendly? It feels like people are ten thousand times more likely to point out my flaws than to appreciate something I said. Also, there's next to no emotional relating to one another. People show appreciation silently in votes, and give verbal criticism, and there are occasionally compliments, but there seems to be a dearth of friendliness. I don't need instant bonding, but the coldness is thick. If I try to tell by the way people are acting, I'm half convinced that most of the people here think I'm a moron. I'm thick skinned enough that it doesn't get to me, but I don't envision this type of environment working to draw women.
I've had similar unfriendly experiences in other male-dominated environments, like a class of mostly boys. They were aggressive - in a selfish way, as opposed to a constructive one. For instance, if the teacher was demonstrating something, they'd crowd around aggressively trying to get the best spots. I was much shorter, which made it harder to see. This forced me to compete for a front spot if I wanted to see at all, and I never did because I just wasn't like that. So that felt pretty insensitive. Another male-dominated environment was similarly heavy on the criticism and light on niceness.
These seem to be recurring themes in male-dominated environments, and they have always had somewhat of a deterring effect on me: selfish competitive behavior (constructive competition for an award or to produce something of quality is one thing, but competing for a privilege in a way that hurts someone at a disadvantage is off-putting), a focus on negative reinforcement (acting like tough guys by withholding compliments and being abrasive), lack of friendliness (there can be no warm fuzzies when you're acting manly), and hostility toward sensitivity.
One exception to this is Vladimir_Nesov. He has behaved in a supportive and yet honest way that feels friendly to me. ShannonFriedman does "honest yet friendly" well, too.
A lot of guys I've dated in the last year have made the same creepy mistake. I think this is likely to be relevant because they're so much like LW members (most of them are programmers, their personalities are very similar and one of them had even signed up for cryo), and because I've seen some hints of this behavior on the discussions. I don't talk enough about myself here to actually bring out this "creepy" behavior (anticipation of that behavior is inhibiting me as well as not wanting to get too personal in public) so this could give you an insight that might not be possible if I spoke strictly of my experiences on LessWrong.
The mistake goes like this:
I'd say something about myself.
They'd disagree with me.
For a specific example, I was asked whether I was more of a thinker or feeler and I said I was pretty balanced. He retorted that I was more of a thinker. When I persist in these situations, they actually argue with me. I am the one who has spent millions of minutes in this mind, able to directly experience what's going on inside of it. They have spent, at this point, maybe a few hundred minutes observing it from the outside, yet they act like they're experts. If they said they didn't understand, or even that they didn't believe me, that would be workable. But they try to convince me I'm wrong about myself. I find this deeply disturbing and it's completely dysfunctional. There's no way a person will ever get to know me if he won't even listen to what I say about myself. Having to argue with a person over who I am is intolerable.
I've thought about this a lot trying to figure out what they're trying to do. It's never going to be a sexy "negative hit" to argue with me about who I am. Disagreeing with me about myself can't possibly count as showing off their incredible ability to see into me because they're doing the exact opposite: being willfully ignorant. Maybe they have such a need to box me into a category that they insist on doing so immediately. Personalities don't fit nicely in categories, so this is an auto-fail. It comes across as if they're either deluded into believing they're some kind of mind-reading genius or that they don't realize I'm a whole, grown-up human being complete with the ability to know myself. This has happened on the LessWrong forum also.
I have had a similar problem that only started to make sense after considering that they may have been making a conscious effort to develop skepticism: I had a lot of experiences where it felt like everything I said about myself was being scrutinized. It makes perfect sense to be skeptical about other conversation topics, but when they're skeptical about things I say about myself, it's grating. This is because it's not likely that either of us will be able to prove or disprove anything about my personality or subjective experiences in a short period of time, and possibly never. Yet saying nothing about ourselves is not an option if we want to get to know each other better. I have to start somewhere.
It's almost like they're in such a rush to have definitive answers about me that they're sabotaging their potential to develop a real understanding of me. Getting to know people is complicated - that's why it takes a long time. Tearing apart someone's self-expressions can't save you from the ambiguity.
I need "getting to know me" / "sharing myself" type conversations to be an exploration. I do understand the need to construct one's own perspective on each new person. I don't need all my statements to be accepted at face value. I just want to feel that the person is happily exploring. They should seem like they're having fun checking out something interesting, not interrogating me and expecting to find a pile of errors. Maybe this happens because of having a habit of skeptical thinking - they make people feel scrutinized without knowing it.
Official LW uncensored thread (on Reddit)
http://www.reddit.com/r/LessWrong/comments/17y819/lw_uncensored_thread/
This is meant as an open discussion thread someplace where I won't censor anything (and in fact can't censor anything, since I don't have mod permissions on this subreddit), in a location where comments aren't going to show up unsolicited in anyone's feed (which is why we're not doing this locally on LW). If I'm wrong about this - i.e. if there's some reason that Reddit LW followers are going to see comments without choosing to click on the post - please let me know and I'll retract the thread and try to find some other forum.
I have been deleting a lot of comments from (self-confessed and publicly designated) trolls recently, most notably Dmytry aka private-messaging and Peterdjones, and I can understand that this disturbs some people. I also know that having an uncensored thread somewhere else is probably not your ideal solution. But I am doing my best to balance considerations, and I hope that having threads like these is, if not your perfect solution, then something that you at least regard as better than nothing.
[Meta] Server Slow
Is it just me, or has the server been unusually slow the past couple of days? During particularly bad times I'm even getting various HTTP errors.
[minor] Separate Upvotes and Downvotes Implemented
It seems that if you look at the column on the right of the page, you can see upvotes and downvotes separately for recent posts. The same [n, m] format is displayed for recent comments, but it doesn't seem to actually sync with the score displaying on the comment. This feature only seems available on the sidebar: looking at the actual comment or post doesn't give you this information.
Thanks, whoever did this!
LW anchoring experiment: maybe
I ran an informal experiment testing whether LessWrong karma scores are susceptible to a form of anchoring based on the first comment posted. A medium-large effect size is found, although the data do not fit the assumed normal distribution and the more sophisticated analysis is equivocal, so there may or may not be an anchoring effect.
Full writeup on gwern.net at http://www.gwern.net/Anchoring
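Since the writeup notes that the karma data fail the normality assumption, a distribution-free check such as a permutation test is a natural complement. Below is a minimal, self-contained sketch of that kind of analysis; the karma scores here are made up for illustration (the real data and analysis are in the linked writeup):

```python
import random
import statistics

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sample permutation test on the difference of means.

    Repeatedly reshuffles the pooled scores into two groups and counts how
    often the shuffled mean difference is at least as extreme as the
    observed one. Returns the two-sided p-value; needs no normality
    assumption.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        perm_a = pooled[:len(a)]
        perm_b = pooled[len(a):]
        if abs(statistics.mean(perm_a) - statistics.mean(perm_b)) >= observed:
            count += 1
    return count / n_iter

# Hypothetical karma scores of comments posted after a high- vs. a
# low-scoring first comment (invented numbers, not the experiment's data):
high_anchor = [12, 9, 15, 7, 11, 14, 8, 10]
low_anchor = [5, 8, 3, 9, 6, 4, 7, 5]
p = permutation_test(high_anchor, low_anchor)
```

A small p here would suggest the anchoring condition shifted scores; with real karma data the same test sidesteps the non-normality problem the writeup mentions.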
META: Deletion policy
http://wiki.lesswrong.com/wiki/Deletion_policy
This is my attempt to codify the informal rules I've been working by.
I'll leave this post up for a bit, but strongly suspect that it will have to be deleted not too long thereafter. I haven't been particularly encouraged to try responding to comments, either. Nonetheless, if there's something I missed, let me know.
New censorship: against hypothetical violence against identifiable people
New proposed censorship policy:
Any post or comment which advocates or 'asks about' violence against sufficiently identifiable real people or groups (as opposed to aliens or hypothetical people on trolley tracks) may be deleted, along with replies that also contain the info necessary to visualize violence against real people.
Reason: Talking about such violence makes that violence more probable, and makes LW look bad; and numerous message boards across the Earth censor discussion of various subtypes of proposed criminal activity without anything bad happening to them.
More generally: Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored on the grounds that it makes LW look bad and that anyone talking about a proposed crime on the Internet fails forever as a criminal (i.e., even if a proposed conspiratorial crime were in fact good, there would still be net negative expected utility from talking about it on the Internet; if it's a bad idea, promoting it conceptually by discussing it is also a bad idea; therefore and in full generality this is a low-value form of discussion).
This is not a poll, but I am asking in advance if anyone has non-obvious consequences they want to point out or policy considerations they would like to raise. In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole - but rather, 'Are there any predictable consequences we didn't think of that you would like to point out, and possibly bet on with us if there's a good way to settle the bet?'
Yes, a post of this type was just recently made. I will not link to it, since this censorship policy implies that it will shortly be deleted, and reproducing the info necessary to say who was hypothetically targeted and why would be against the policy.
Buffalo Meetup: Survey of Interest
I'd like to start a LW meetup group in Buffalo, NY and would like to get an idea of how many people may be interested in attending. I'm hoping to get meetups started sometime in January. If you're interested, email me at BuffaloLW@gmail.com (and comment below). Anyone who sends me an email will receive a link to the event on Doodle.com to try and work out a time and day of the week that works for most people.
Also, where would you like the first meeting to be held?
1. Private Residence (my house, or you can offer yours if you like)
2. Public Space (like Spot Coffee?)
3. Don't Care
Edit: I realized based on Alicorn's interest that there may be a decent number of people traveling to the area for the holidays who are interested in meeting during the holiday break. If you are one of these people, comment below because I would love to host you.
How to incentivize LW wiki edits?
How can we incentivize more productive activity on the LW wiki? There are many articles that could be created or expanded.
Are there previous discussions on this?
One suggestion: We could add a "good explanation!" button at the bottom of each article, and every time it is clicked, the user account responsible for the plurality of words in the current draft of the article gets 10 karma points. This requires that LW and LW-wiki accounts be synced, first.
How can this suggestion be improved?
What other suggestions do people have?
2012 Less Wrong Census Survey: Call For Critiques/Questions
The first draft of the 2012 Less Wrong Census/Survey is complete (see 2011 here). I will link it below if you promise not to try to take the survey because it's not done yet and this is just an example!
2012 Less Wrong Census/Survey Draft
I want three things from you.
First, please critique this draft. Tell me if any questions are unclear, misleading, offensive, confusing, or stupid. Tell me if the survey is so unbearably long that you would never possibly take it. Tell me if anything needs to be rephrased.
Second, I am willing to include any question you want in the Super Extra Bonus Questions section, as long as it is not offensive, super-long-and-involved, or really dumb. Please post any questions you want there. Please be specific - not "Ask something about abortion" but give the exact question you want me to ask as well as all answer choices.
Try not to add more than five or so questions per person, unless you're sure yours are really interesting. Please also don't add any questions that aren't very easily sort-able by a computer program like SPSS unless you can commit to sorting the answers yourself.
Third, please suggest a decent, quick, and at least somewhat accurate Internet IQ test I can stick in a new section, Unreasonably Long Bonus Questions.
I will probably post the survey to Main and officially open it for responses sometime early next week.
[LINK] Law Goes Meta
Some legal background:
- In the United States, there are several courts of appeals, called Circuit Courts. They can disagree about legal points - this is called a circuit split. One of the purposes of the Supreme Court is to resolve circuit splits.
- Sometimes, laws are ruled to be ambiguous. If so, the relevant agency regulations interpreting the law are determinative, unless the regulations are an obviously stupid interpretation. This is called Chevron deference.
One would think that disagreement between Circuits about the meaning of a law would be legally relevant evidence about whether the law was ambiguous. Instead, there appears to be a circuit split on the meaning of circuit splits.
More available here, for the amusement of those on this site who like to think meta. Also a bit of a lesson on the limits of meta-style analysis in solving actual problems.
Meta: What tool turns rich text into clean HTML?
If you write an article in Word, Writer, Scrivener, Google Docs, or another rich text editor, and then copy+paste that rich text into an online WYSIWYG editor like the one on Less Wrong or WordPress, the HTML generated by LW or WordPress is incredibly messy and does tons of weird stuff to your text.
Because of this, I've taken to composing all my posts in Markdown, which is plain text (like HTML) but easier to read, and can be easily converted to clean HTML.
Ideally, though, authors would be able to compose articles in whatever editor they want, and then paste their rich text into a simple web tool that strips all formatting from the HTML except the formatting they want to keep.
HTML Purifier, TIDY, and HTML Tidy aren't quite what we need. Word2CleanHTML, Word HTML Cleaner and WordOff, along with CKEditor's and TinyMCE's 'Paste from Word' features, kinda work, but not really: they still make mistakes pretty often when I try them.
What I was hoping to find was something like Word2CleanHTML but with two changes:
- Does a good job when pasting from just about any rich text editor, not just Word.
- Allows the user to choose which formatting to keep, using a list of checkboxes for bold, italic, strikethrough, headings, text coloring, blockquotes, etc.
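As an illustration of the kind of tool described, here is a minimal sketch using only Python's standard library: a parser that re-emits a user-chosen whitelist of tags and drops everything else, attributes included. This is a toy under my own assumptions (the whitelist and tag names are illustrative), not the hoped-for tool; it makes no attempt to preserve link targets or repair malformed markup.

```python
from html.parser import HTMLParser

# Illustrative whitelist - in the imagined tool this would come from the
# user's checkboxes (bold, italic, blockquotes, etc.).
KEEP = {"b", "i", "em", "strong", "blockquote", "p"}

class TagStripper(HTMLParser):
    """Re-emit HTML, keeping only whitelisted tags; text always survives."""

    def __init__(self, keep):
        super().__init__()
        self.keep = keep
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in self.keep:
            # Attributes (style=..., class=...) are dropped deliberately:
            # they are where most of the editor-generated mess lives.
            self.out.append(f"<{tag}>")

    def handle_endtag(self, tag):
        if tag in self.keep:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

def clean(html, keep=KEEP):
    parser = TagStripper(keep)
    parser.feed(html)
    return "".join(parser.out)

# Typical Word-style paste residue:
messy = '<p style="margin:0"><span class="MsoNormal"><b>Hello</b> world</span></p>'
print(clean(messy))  # -> <p><b>Hello</b> world</p>
```

The span and its class vanish, the inline style is discarded, but the bold formatting and text come through clean.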
Meta: LW Policy: When to prohibit Alice from replying to Bob's arguments?
In light of recent (and potential) events, I wanted to start a discussion here about a certain method of handling conflicts on this site's discussion threads, and hopefully form a consensus on when to use the measure described in the title. Even if the discussion has no impact on site policy ("executive veto"), I hope administrators will at least clarify when such a measure will be used, and for what reason.
I also don't want to taint or "anchor" the discussion by offering hypothetical situations or arguments for one position or another. Rather, I simply want to ask: Under what conditions should a specific poster, "Alice" be prohibited from replying directly to the arguments in a post/comment made by another poster, "Bob"? (Note: this is referring specifically to replies to ideas and arguments Bob has advanced, not general comments about Bob the person, which should probably go under much closer scrutiny because of the risk of incivility.)
Please offer your ideas and thoughts here on when this measure should be used.
Random LW-parodying Statement Generator
So, I were looking at this, and then suddenly this thing happened.
EDIT:
New version! I updated the link above to it as well. Added LOADS and LOADS of new content, although I'm not entirely sure if it's actually more fun (my guess is there's more total fun due to variety, but that it's more diluted).
I ended up working on this basically the entire day today, and implemented practically all the ideas I have so far, except for some grammar issues that would require disproportionately much work. So unless there are loads of suggestions or my brain comes up with lots of new ideas over the next few days, this may be the last version for a while, and I may call it beta and ask for spell-checking. Still alpha as of writing this, though.
Since there were some close calls already, I'll restate this explicitly: it'd be easier for everyone if there weren't any forks for at least a few more days, even ones just for spell-checking. After that, or after I move this to beta, feel more than free to do whatever you want.
Thanks to everyone who commented! ^_^
old Source, old version, latest source
Credits: http://lesswrong.com/lw/d2w/cards_against_rationality/ , http://lesswrong.com/lw/9ki/shit_rationalists_say/ , various people commenting on this article with suggestions, and random people on the bay12 forums who, ages ago, helped me with the engine this is descended from.
Kaj Sotala's Posts
Here's an index of Kaj_Sotala's articles (not including meta posts):
- The Curse of Identity (108)
- The Psychological Diversity of Mankind (71)
- What is Bayesianism? (67)
- The Substitution Principle (64)
- Avoid Misinterpreting Your Emotions (61)
- Consistently Inconsistent (57)
- Fallacies as Weak Bayesian Evidence (54)
- I Was Not Almost Wrong But I Was Almost Right (50)
- Problems in Evolutionary Psychology (50)
- What Cost for Irrationality (50)
- Your Intuitions are not Magic (49)
- Levels of Communication (49)
- A Taxonomy of Bias: The Cognitive Miser (48)
- Suffering as Attentional-allocational Conflict (46)
- It's Okay To Be At Least A Little Irrational (45)
- How to Run a Successful Less Wrong Meetup (44)
- Controlling Your Inner Control Circuits (43)
- How to Always Have Interesting Conversations (42)
- Compartmentalization as a Passive Phenomenon (42)
- Fundamentally Flawed or Fast and Frugal (40)
- Thoughts on Moral Intuitions (38)
- Pain and Gain Motivation (37)
- Overcoming Suffering: Emotional Acceptance (35)
- What Intelligence Tests Miss: The Psychology of Rational Thought (34)
- Are These Cognitive Biases, Biases? (34)
- The Tragedy of the Anticommons (32)
- Strategic Ignorance and Plausible Deniability (31)
- What Data Generated That Thought? (30)
- A Rational Identity (30)
- SIAI vs. FHI Achievements (2008-2010) (27)
- You Cannot be Mistaken About (not) Wanting to Wirehead (27)
- Why No Archive of Refuted Research? (25)
- Applying Utility Functions to Humans Considered Harmful (25)
- Rationalists Should Beware Rationalism (23)
- Modularity and Buzzy (23)
- Applied Bayes' Theorem: Reading People (22)
- 5-second Level Case Study: Value of Information (21)
- Intelligence Explosion vs. Co-operative Explosion (20)
- Heuristics and Biases in Charity (20)
- Deliberate and Spontaneous Creativity (20)
- Does Blind Review Slow Down Science? (20)
- To Like Each Other, Sing and Dance in Synchrony (19)
- Intuitive Differences: When to Agree to Disagree (18)
- Declare Your Signaling and Hidden Agendas (17)
- Modularity Signaling and Belief in Belief (16)
- Smart Non-Reductionists, Philosophical vs. Engineering Mindsets, and Religion (13)
- A Taxonomy of Bias: Mindware Problems (13)
- What Epistemic Hygiene Norms Should There Be? (11)
- Ethics as a Black Box Function (11)
- A Social Norm Against Unjustified Opinions (10)
- The Concepts Problem (09)
- The Twin Webs of Knowledge (05)
Debugging the Quantum Physics Sequence
This article should really be called "Patching the argumentative flaw in the Sequences created by the Quantum Physics Sequence".
There's only one big thing wrong with that Sequence: the central factual claim is wrong. I don't mean the claim that the Many Worlds interpretation is correct; I mean the claim that the Many Worlds interpretation is obviously correct. I don't agree with the ontological claim either, but I especially don't agree with the epistemological claim. It's a strawman which reduces the quantum debate to Everett versus Bohr - well, it's not really Bohr, since Bohr didn't believe wavefunctions were physical entities. Everett versus Collapse, then.
I've complained about this from the beginning, simply because I've also studied the topic and profoundly disagree with Eliezer's assessment. What I would like to see discussed on this occasion is not the physics, but rather how to patch the arguments in the Sequences that depend on this wrong sub-argument. To my eyes, this is a highly visible flaw, but it's not a deep one. It's a detail, a bug. Surely it affects nothing of substance.
However, before I proceed, I'd better back up my criticism. So: consider the existence of single-world retrocausal interpretations of quantum mechanics, such as John Cramer's transactional interpretation, which is descended from Wheeler-Feynman absorber theory. There are no superpositions, only causal chains running forward in time and backward in time. The calculus of complex-valued probability amplitudes is supposed to arise from this.
The existence of the retrocausal tradition already shows that the debate has been represented incorrectly; it should at least be Everett versus Bohr versus Cramer. I would also argue that when you look at the details, many-worlds has no discernible edge over single-world retrocausality:
- Relativity isn't an issue for the transactional interpretation: causality forwards and causality backwards are both local; it's the existence of loops in time which creates the appearance of nonlocality.
- Retrocausal interpretations don't have an exact derivation of the Born rule, but neither does many-worlds.
- Many-worlds finds hope of such a derivation in a property of the quantum formalism: the resemblance of density matrix entries to probabilities. But single-world retrocausality finds such hope too: the Born probabilities can be obtained from the product of ψ with ψ*, its complex conjugate, and ψ* is the time reverse of ψ.
- Loops in time just fundamentally bug some people, but splitting worlds have the same effect on others.
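The Born-rule point in the list above can be stated compactly. A sketch of the identity being referred to (the time-symmetric gloss is the retrocausalist's reading, not an established derivation):

```latex
% Born probability as the product of the wavefunction with its conjugate:
P(x) \;=\; \psi(x)\,\psi^*(x) \;=\; |\psi(x)|^2
% In the time-symmetric reading, \psi is the forward-in-time (retarded)
% wave and \psi^* is interpreted as the backward-in-time (advanced) wave,
% so the probability arises where the two causal chains meet.
```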
I am not especially an advocate of retrocausal interpretations. They are among the possibilities; they deserve consideration and they get it. Retrocausality may or may not be an element of the real explanation of why quantum mechanics works. Progress towards the discovery of the truth requires exploration on many fronts, that's happening, we'll get there eventually. I have focused on retrocausal interpretations here just because they offer the clearest evidence that the big picture offered by the Sequence is wrong.
It's hopeless to suggest rewriting the Sequence, I don't think that would be a good use of anyone's time. But what I would like to have, is a clear idea of the role that "the winner is ... Many Worlds!" plays in the overall flow of argument, in the great meta-sequence that is Less Wrong's foundational text; and I would also like to have a clear idea of how to patch the argument, so that it routes around this flaw.
In the wiki, it states that "Cleaning up the old confusion about QM is used to introduce basic issues in rationality (such as the technical version of Occam's Razor), epistemology, reductionism, naturalism, and philosophy of science." So there we have it - a synopsis of the function that this Sequence is supposed to perform. Perhaps we need a working group that will identify each of the individual arguments, and come up with a substitute for each one.
Less Wrong articles categorized by reading difficulty
One thing that could help new users dive into Less Wrong would be to make some reading recommendations based on reading difficulty. (I'm including some things not hosted on LessWrong.com when they're very LessWrong-ish and written by leading LessWrong authors.) For example:
For everyone
- Yudkowsky, Harry Potter and the Methods of Rationality
- Yudkowsky, Twelve Virtues of Rationality
- Yvain, The Worst Argument in the World
- Yudkowsky, Reductionism
- Lukeprog, How to Beat Procrastination
- Yudkowsky, Technical Explanation of Technical Explanation
- Yudkowsky, Timeless Causality
- Yudkowsky, Bell's Theorem
[META] Karma for last 30 days?
Has anyone yet mentioned or reported that for the last couple days, the "karma for last 30 days" is showing zero for everyone? And that we no longer can see the top contributors for the last 30 days either?
Do we have an explanation, or an estimate of when a bugfix will arrive?
PSA: People can see what you've "liked" and "disliked" if you checked "Make my votes public"
I think this feature might have been broken a while ago, but it works now. So if you don't want your likes and dislikes to be public, go to your preferences page and uncheck "Make my votes public." At the moment, the upvotes and downvotes of many prominent users are visible by clicking their username and then clicking "Liked" or "Disliked".
That is all.
[META] Inbox icon behaving unexpectedly
I just saw the letter icon under my username in orange, indicating that there should be something new in my inbox; but when I went to my inbox there was nothing there I hadn't already read. I wonder if this could be related to the recent trouble with the PM system? I sent a PM the other day and might have gotten a response to it which triggered the colour-the-icon code but not, perchance, the actual display-in-inbox code. Can I get a volunteer to receive a PM from me, or to PM me, and test whether the response shows up in the sender's inbox?
Issue 301 shipped: Show parent comments on /comments
See http://code.google.com/p/lesswrong/issues/detail?id=301 for detail.
Go to your Preferences page to enable it, then visit /comments to take advantage of this feature.
(Work done by John Simon, integrated by User:wmoore.)
New "Best" comment sorting system
Way back in October 2009 Reddit introduced their "Best" comment sorting system. We've just pulled those changes into Less Wrong. The changes affect only comments, not stories.
It's good. It should significantly improve the visibility of good comments posted later in the life of an article. You (yes you) should adopt it. It's the default for new users.
See http://blog.reddit.com/2009/10/reddits-new-comment-sorting-system.html for the details.
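The linked post explains the idea: rather than sorting by raw score, "Best" ranks each comment by the lower bound of the Wilson score confidence interval on the fraction of its votes that are upvotes, so a comment needs both a good ratio and enough votes to rank highly. A minimal sketch (function and variable names are mine):

```python
import math

def wilson_lower_bound(upvotes, downvotes, z=1.96):
    """Lower bound of the Wilson score interval (95% by default) on the
    fraction of votes that are upvotes. Ranking by this value penalizes
    comments with few votes: a perfect ratio on 2 votes is weaker evidence
    of quality than 60% upvotes on 100 votes."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    phat = upvotes / n
    centre = phat + z * z / (2 * n)
    spread = z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
    return (centre - spread) / (1 + z * z / n)

# More evidence beats a perfect ratio:
print(wilson_lower_bound(60, 40) > wilson_lower_bound(2, 0))  # -> True
```

This is also why the new sort helps good late comments: their rank depends on their vote ratio (with an uncertainty penalty), not on the raw score they've had time to accumulate.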
Seeking a "Seeking Whence 'Seek Whence'" Sequence
One of the sharpest and most important tools in the LessWrong cognitive toolkit is the idea of going meta, also called seeking whence or jumping out of the system, all terms crafted by Douglas Hofstadter. Though popularized by Hofstadter and repeatedly emphasized by Eliezer in posts like "Lost Purposes" and "Taboo Your Words", Wikipedia indicates that similar ideas have been around in philosophy since at least Anaximander in the form of the Principle of Sufficient Reason (PSR). I think it'd be only appropriate to seek whence this idea of seeking whence, taking a history of ideas perspective. I'd also like analyses of where the theme shows up and why it's appealing and so on, since again it seems pretty important to LessWrong epistemology. Topics that I'd like to see discussed are:
- How conservation of probability in Bayesian probability theory and conservation of phase space volume in statistical mechanics are related—a summary of Eliezer's posts on the topic would be great.
- How conservation of probability &c. are related to other physical/mathematical laws, e.g. Noether's theorem and quantum mechanics' continuity equation.
- The history of the idea of conservation laws; whether the discovery of conservation laws was fueled by PSR-like philosophical-like concerns (e.g. Leibniz?), by lower level intuitive concerns, or other means.
- How conservation of probability &c. are related to the idea of seeking whence [pdf] (e.g., "follow the improbability").
- How the PSR relates to conservation of probability &c. and to seeking whence.
- How going meta and seeking whence are related/equivalent.
- Which philosophers have used something like the PSR (e.g. Spinoza, Leibniz) and which haven't; those who haven't, what their reasons were for not using it.
- What kinds of conclusions are typically reached via the PSR or have historically been justified by the PSR, and whether those conclusions fit with LW's standard conclusions. If it disagrees with LW's standard conclusions, where does the PSR not apply or not apply as strongly; alternatively, why standard LW conclusions might be mistaken.
- Whether Schopenhauer's four-fold division of the PSR makes sense. (Schopenhauer's a relatively LW-friendly continentalesque philosopher.) A summary of any criticisms of his four-fold division.
- What makes the PSR, going meta, "JOOTS"-ing and seeking whence appealing, from a metaphysical, epistemological, pragmatic, and psychological perspective. What sorts of environments or problem sets select for it. (The Baldwin effect and similar phenomena might be relevant.)
- What going meta / seeking whence looks like at different levels of organization; how one jumps out of systems at varying levels.
- Eliezer's rule of derivative validity from CFAI and how it relates to the PSR; an analysis of how the (moral, or perhaps UDT-like decision-policy-centric) PSR might be relevant to Friendliness philosophy, e.g. as compared with CEV-like proposals [pdf].
- How latent Platonic nodes in TDT [pdf] (p. 78) relate to the PSR.
- A generalization of CFAI's causal validity semantics to timeless validity semantics in the spirit of the generalization of CDT to TDT, or perhaps even further generalizations of causal validity semantics in the spirit of Updateless Decision Theory or eXceptionless Decision Theory. (ETA: Whoops, Eliezer already discussed the acausal level, but seems to have only mentioned Platonic forms as an afterthought. Maybe ignore this bullet point.)
- How the PSR and the rule of derivative validity relate to Robin Hanson's idea of pre-rationality and Wei Dai's questions about extending pre-rationality to include past selves' utility functions—whether this elucidates the relation between XDT and UDT.
- Where Hofstadter picked up the idea of "going meta" and what led him to think it was important. What led Eliezer to rely on it so much and emphasize the importance of avoiding lost purposes.
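As background for the first two bullets: in Bayesian terms, "conservation of probability" is the law of conservation of expected evidence, which pins down the prior as the probability-weighted average of the possible posteriors:

```latex
P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)
```

So if observing E would raise your credence in H, then failing to observe E must lower it; no experiment can be designed so that every outcome confirms the hypothesis. The statistical-mechanics counterpart is Liouville's theorem, under which phase-space volume is conserved by Hamiltonian flow. In both cases confidence (or volume) cannot be created from nothing, which is one way to make the analogy in the first bullet precise.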
What is the best way to read the sequences?
I am a relative newcomer to LW, and I have read about half the articles in the core sequences, but I think I have not been optimizing for comprehension/retention while reading them. Could some more experienced members of the community give some recommendations on how best to read them (both in terms of which posts in which order, and in terms of the actual reading process)?
I will periodically edit this post to include some of the suggestions in order to provide the most benefit to future newcomers.
EDIT: Here are the most popular suggestions:
Order:
- Chronological order (5 or 6 comments)
- Whatever method you will actually use or are already using (1.5 comments)
How to read them:
- e-reader or smartphone (3 comments)
- Text-to-speech (1 comment)
- Read How to Read a Book and implement its suggestions (1 comment) (note: the link is a 9-page PDF, so it shouldn't be too hard to read and see what's useful)
- Mind-mapping (1 comment)
- Try to guess what the links go to (1 comment)
Intellectual insularity and productivity
Guys I'd like your opinion on something.
Do you think LessWrong is too intellectually insular? What I mean by this is that we very seldom seem to adopt useful vocabulary, arguments, or information from outside of LessWrong. All I can think of is some of Robin Hanson's and Paul Graham's stuff, and I don't think Robin Hanson really counts, since LessWrong began as a spin-off of Overcoming Bias.
The community seems not to update on ideas and concepts that didn't originate here. The only major examples fellow LWers brought up in conversation were works that Eliezer had cited as great or influential. :/
Another thing (I could be wrong about this, naturally): it seems clear to me that LessWrong has not grown. I'm not talking numerically. I can't put my finger on major progress made in the past two years. I have heard several other users express similar sentiments. To quote one user:
I notice that, in topics that Eliezer did not explicitly cover in the sequences (and some that he did), LW has made zero progress in general.
I've recently come to think this is probably true to a first approximation. I was checking out a blogroll and saw LessWrong listed as Eliezer's blog about rationality. I realized that essentially it is. And worse, this makes it a very crappy blog, since the author doesn't make new updates any more. Originally the man had high hopes for the site. He wanted to build something that could keep going on its own, growing without him. It turned out to be a community mostly dedicated to studying the scrolls he left behind. We don't even seem to do a good job of getting others to read the scrolls.
Overall there seems to be little enthusiasm for actually systematically reading the old material. I'm going to share my take on what is, I think, a symptom of this. I was debating which title to pick for my first ever original content Main article (it was originally titled "On Conspiracy Theories") and made what at first felt like a joke but then took on a horrible ring of truth:
Over time the meaning of an article will tend to converge with the literal meaning of its title.
We like linking articles, and while people may read a link the first time, they don't tend to read it the second or third time they run across it. The phrase is eventually picked up and used out of its appropriate context. Something that was supposed to be shorthand for a nuanced argument starts to mean exactly what "it says". Well, not exactly: people still recall it as a vague applause light. Which is actually worse.
I cited precisely "Politics is the Mindkiller" as an example of this. In the original article Eliezer basically argues that gratuitous politics (political thinking that isn't outweighed by its value to the art of rationality) is to be avoided. This soon came to mean that it is forbidden to discuss politics in Main and Discussion articles, though it does live on in the comment sections.
Now, the question of whether LessWrong remains intellectually productive is separate from the question of whether it is insular. But I feel both need to be discussed. If our community wasn't growing but wasn't insular either, it could at least remain relevant.
This site has a wonderful ethos for discussion and thought. Why do we seem to be wasting it?
Proposal: Show up and down votes separately
One of the most interesting things about this site is the karma scoring, and that it reflects (to a greater degree than you see elsewhere) an objective assessment of the merits of an argument.
[Edit^6: the proposal in this post is related to the Kibitzer system, but this post discusses adding information, while that system concentrates on taking information away. Special thanks to matt for his comment and to Vincentyu for being the first to point to prior discussion. A related issue is discussed here (2009) with reference to a wiki, and on which Eliezer said "I may end up linking this from the About page when it comes time to explain suggested voting policies". Data: it took me ~2 days of effort to get linked to this information (09 June 2012 11:29PM -> 11 June 2012 10:28:26PM).]
Suppose a controversial post/comment has six up votes and three down votes. Right now we only see the net result: 3 points. When the voting is mixed, we're losing important information. If it's reasonably easy to implement, could we please show up and down tallies separately? E.g. show "3 points (+6,-3)", at least when the voting is mixed? I think the negative votes are the single most important thing. In particular, I want to know about negative votes I receive and where I receive them, because those are the posts where I need to think carefully.
Example: here's a welcome post by syzygy, which relates to Eliezer's post about Politics as the Mind Killer. I know that it's controversial, because I can sort by controversial and it shows up high on the welcome post thread (neat feature!), but I can't tell how many down votes it has. Does syzygy commit a fallacy? (I don't mean to pick on you, sorry about that; I liked your post.)
Of course this change wouldn't fix everything. If a post has "-1 points (+0,-1)", that doesn't mean only one person read it and disapproved; maybe 100s read it and thought it was bad, but saw that it already had -1 net and considered that sufficiently punitive. This is pretty good; we don't want to spend all our time fiddling with scores.
I mean if we wanted to get fancy and use Bayesian inspired scoring, we could let everyone who wishes assign a score (say from -5 to 5) and report posterior summaries of the scores. Or, more importantly if we value objective scoring, we could identify posts that are controversial and we could have the system randomly select users with respectable karma, and assign them to give their score on the post. Such a score would be valid in a way that the current "convenience" scores are not. Additionally, posts could be scored on multiple axes: soundness of argument, potential impact, innovation, whether we agree with the normative basis of a judgement, etc....
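As an illustration of the "posterior summaries" idea (the model and parameters here are illustrative assumptions, not a concrete proposal), a simple Bayesian shrinkage estimate treats the prior as a few pseudo-votes at a neutral score:

```python
def posterior_mean(scores, prior_mean=0.0, prior_weight=2.0):
    """Shrinkage estimate of a post's quality on a -5..5 scale:
    the posterior mean under a simple conjugate normal model,
    pulling the raw average toward prior_mean. prior_weight acts
    like that many pseudo-votes cast at the prior mean.
    (Illustrative sketch only; parameters are assumptions.)"""
    n = len(scores)
    if n == 0:
        return prior_mean
    return (prior_weight * prior_mean + sum(scores)) / (prior_weight + n)
```

A single 5 yields roughly 1.67 rather than 5, so one enthusiastic voter cannot pin a post at the extreme, while a hundred 5s push the estimate close to 5.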
But I'm not arguing for a complicated change, just a simple less wrong one.
Other than feasibility concerns, or maybe aesthetics, the strongest argument I can see against this proposal is that we might embarrass or shame users. Can anyone give an example where that might be a concern? I figure that since we already show negative scores, users have gotten over most of that inhibition, but I'm new here.
Another possible criticism is that it's a non-issue: almost all posts are all plus or all minus, so it's not worth the effort. I disagree with this one because I think the posts where we have mixed judgements are the most important ones to get right.
EDIT: Wouldn't it be nice to know how many down votes this post has?
Focus on rationality
(This is my view in the recent debate about posts giving a "rational" discussion of some random topic. It was originally at comment level but I've extended it and posted it in discussion because I want to know if and where people disagree with me, and for what reasons.)
I come to Less Wrong to learn about how to think and how to act effectively. I care about general algorithms that are useful for many problems, like "Hold off on proposing solutions" or "Habits are ingrained faster when you pay conscious attention to your thoughts when you perform the action". These posts have very high value to me because they improve my effectiveness across a wide range of areas.
Another such technique is "Dissolving the question". Yvain's "Diseased thinking: dissolving questions about disease" is valuable as an exemplary performance of this technique. It adds to Eliezer's description of question-dissolving by giving a demonstration of its use on a real question. Its main value comes from this; anything I learnt about disease whilst reading it is just a bonus.
To quote badger in the recent thread "Rational Toothpaste: A Case Study"
I claim a post on "rational toothpaste buying" could be on-topic and useful, if correctly written to illustrate determining goals, assessing tradeoffs, and implementing the final conclusions. A post detailing the pros and cons of various toothpaste brands is for a dentistry or personal hygiene forum; a post about algorithms for how to determine the best brands or whether to do so at all is for a rationality forum.
But we don't need more than one or two such examples! Yvain's post about question-dissolving was the only such post I'll ever need to read.
Posts about toothpaste, house-buying, room-decoration, fashion, shaving or computer hardware only tell me about that particular thing. As good as many of them are they'll never be as useful as a post that teaches me a general method of thought applicable on many problems. And if I want to know about some particular topic I'll just look it up on Google, or go to a library.
It's not possible for LessWrong to give a rational treatment of every subject. There are just too many of them. Even if we did I wouldn't be able to carry all that info around in my head. That's why I need to learn general algorithms for producing rational decisions.
Even though badger makes it clear in the quote I gave that the post is supposed to be about the algorithms used, in the rest of the post almost all the discussion is at the object level (although the conclusion is good). That is, even though badger talks about which methods he's using and why, the focus is still on "What can these methods teach us about toothpaste?" and not "What can optimising toothpaste teach us about our methods?". I'd prefer it if posts tried to answer questions more like the latter. The comments exhibit the same phenomenon. Only one of the comments (kilobug's) talks about the methods used. Most of the rest are actually talking about toothpaste.
So what I'm suggesting is that LessWrong posts (don't forget there's a whole internet to post things on) should focus on rationality. They can talk about other things too, but the question should always be "What can X teach us about rationality?" and not "What can rationality teach us about X?"
Only say 'rational' when you can't eliminate the word
Almost all instances of the word "true" can be eliminated from the sentences in which they appear by applying Tarski's formula. For example, if you say, "I believe the sky is blue, and that's true!" then this can be rephrased as the statement, "I believe the sky is blue, and the sky is blue." For every "The sentence 'X' is true" you can just say X and convey the same information about what you believe - just talk about the territory the map allegedly corresponds to, instead of talking about the map.
When can't you eliminate the word "true"? When you're generalizing over map-territory correspondences, e.g., "True theories are more likely to make correct experimental predictions." There's no way to take the word 'true' out of that sentence because it's talking about a feature of map-territory correspondences in general.
Similarly, you can eliminate the word 'rational' from almost any sentence in which it appears. "It's rational to believe the sky is blue", "It's true that the sky is blue", and "The sky is blue", all convey exactly the same information about what color you think the sky is - no more, no less.
When can't you eliminate the word "rational" from a sentence?
When you're generalizing over cognitive algorithms for producing map-territory correspondences (epistemic rationality) or steering the future where you want it to go (instrumental rationality). So while you can eliminate the word 'rational' from "It's rational to believe the sky is blue", you can't eliminate the concept 'rational' from the sentence "It's epistemically rational to increase belief in hypotheses that make successful experimental predictions." You can Taboo the word, of course, but then the sentence just becomes, "To increase map-territory correspondences, follow the cognitive algorithm of increasing belief in hypotheses that make successful experimental predictions." You can eliminate the word, but you can't eliminate the concept without changing the meaning of the sentence, because the primary subject of discussion is, in fact, general cognitive algorithms with the property of producing map-territory correspondences.
The word 'rational' should never be used on any occasion except when it is necessary, i.e., when we are discussing cognitive algorithms as algorithms.
If you want to talk about how to buy a great car by applying rationality, but you're primarily talking about the car rather than considering the question of which cognitive algorithms are best, then title your post Optimal Car-Buying, not Rational Car-Buying.
Thank you for observing all safety precautions.
[META] Recent Posts for Discussion and Main
This link
http://lesswrong.com/r/all/recentposts
gives a page which lists all the recent posts in both the Main and Discussion sections. I've posted it in the comments section before, but I decided to put it in a discussion post because it's a really handy way of accessing the site. I found it by guessing the URL.
Experiment: a good researcher is hard to find
See previously “A good volunteer is hard to find”
Back in February 2012, lukeprog announced that SIAI was hiring more part-time remote researchers, and you could apply just by demonstrating your chops on a simple test: review the psychology literature on habit formation with an eye towards practical application. What factors strengthen new habits? How long do they take to harden? And so on. I was assigned to read through and rate the submissions and Luke could then look at them individually to decide who to hire. We didn’t get as many submissions as we were hoping for, so in April Luke posted again, this time with a quicker easier application form. (I don’t know how that has been working out.)
But in February, I remembered the linked post above from GiveWell, which mentioned that many would-be volunteers did not even finish the test task. I did finish, and I didn't find it that bad; it was actually a kind of interesting exercise in critical thinking and being careful. People suggested that perhaps the attrition was due not to low volunteer quality, but to the feeling that the volunteers were not appreciated and were doing useless makework. (The same reason so many kids hate school…) But how to test this?
Correcting errors and karma
An easy way to win cheap karma on LW:
- Publicly make a mistake.
- Wait for people to call you on it.
- Publicly retract your errors and promise to improve.
Our Phyg Is Not Exclusive Enough
EDIT: Thanks to people not wanting certain words google-associated with LW: Phyg
Lesswrong has the best signal/noise ratio I know of. This is great. This is why I come here. It's nice to talk about interesting rationality-related topics without people going off the rails about politics/fail philosophy/fail ethics/definitions/etc. This seems to be possible because a good number of us have read the lesswrong material (sequences, etc) which inoculates us against that kind of noise.
Of course Lesswrong is not perfect; there is still noise. Interestingly, most of it is from people who have not read some sequence and thereby make the default mistakes or don't address the community's best understanding of the topic. We are pretty good about downvoting and/or correcting posts that fail at the core sequences, which is good. However, there are other sequences, too, many of them critically important to not failing at metaethics/thinking about AI/etc.
I'm sure you can think of some examples of what I mean. People saying things that you thought were utterly dissolved in some post or sequence, but they don't address that, and no one really calls them out. I could dig up a bunch of quotes but I don't want to single anyone out or make this about any particular point, so I'm leaving it up to your imagination/memory.
It's actually kind of frustrating seeing people make these mistakes. You could say that if I think someone needs to be told about the existence of some sequence they should have read before posting, I ought to tell them, but that's actually not what I want to do with my time here. I want to spend my time reading and participating in informed discussion. A lot of us do end up engaging mistaken posts, but that lowers the quality of discussion here because so much time and space has been spent battling ignorance instead of advancing knowledge and discussing real problems.
It's worse than just "oh here's some more junk I have to ignore or downvote", because the path of least resistance ends up being "ignore any discussion that contains contradictions of the lesswrong scriptures", which is obviously bad. There are people who have read the sequences and know the state of the arguments and still have some intelligent critique, but it's quite hard to tell the difference between that and someone explaining for the millionth time the problem with "but won't the AI know what's right better than humans?". So I just ignore it all and miss a lot of good stuff.
Right now, the only stuff I can be reasonably guaranteed is intelligent, informed, and interesting is the promoted posts. Everything else is a minefield. I'd like there to be something similar for discussion/comments. Some way of knowing "these people I'm talking to know what they are talking about" without having to dig around in their user history or whatever. I'm not proposing a particular solution here, just saying I'd like there to be more high quality discussion between more properly sequenced LWers.
There is a lot of worry on this site about whether we are too exclusive or too phygish or too harsh in our expectation that people be well-read, which I think is misplaced. It is important that modern rationality have a welcoming public face and somewhere that people can discuss without having read three years worth of daily blog posts, but at the same time I find myself looking at the moderation policy of the old sl4 mailing list and thinking "damn, I wish we were more like that". A hard-ass moderator righteously wielding the banhammer against cruft is a good thing and I enjoy it where I find it. Perhaps these things (the public face and the exclusive discussion) should be separated?
I've recently seen someone saying that no-one complains about the signal/noise ratio on LW, and therefore we should relax a bit. I've also seen a good deal of complaints about our phygish exclusivity, the politics ban, the "talk to me when you read the sequences" attitude, and so on. I'd just like to say that I like these things, and I am complaining about the signal/noise ratio on LW.
Lest anyone get the idea that no-one thinks LW should be more phygish or more exclusive, let me hereby register that I for one would like us to all enforce a little more strongly that people read the sequences and even agree with them in a horrifying manner. You don't have to agree with me, but I'd just like to put out there as a matter of fact that there are some of us that would like a more exclusive LW.
LessWrong downtime 2012-03-26, and site speed
Our investigation into last week's LW downtime is complete: here (Google Docs).
Executive summary:
We failed to update our AWS configuration after changes at Amazon, which caused a cycle of servers being spawned then killed before they could properly boot. Our automated testing should have notified us of this failure immediately, but included a predictable failure mode (identified by us last year but not fixed). We became aware of the downtime when I checked my email and worked on it until it was resolved.
I personally feel very bad about our multiple failures leading to this incident.
ref. the last time I did this to you: http://lesswrong.com/lw/29v/lesswrong_downtime_20100511_and_other_recent/
Actions:
- We have reconfigured AWS and the tools we use to communicate with it to avoid this failure in the future.
- Improvements to our automated site testing system (Nagios) are underway (expected to be live before 2012-04-13 - these tests will detect greater-than-X-failures-from-Y-trials, rather than the current detect zero-successes-from-Z-trials).
- We have changed our staffing in part in recognition that some systems (including this one) had been allowed to fall out of date, and allocated a developer to review our system administration project planning.
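The improved check described in the second action item (alert on more than X failures out of the last Y trials, rather than only when every one of Z trials fails) might look roughly like this; the function name and thresholds are illustrative, not the actual Nagios configuration:

```python
def should_alert(results, max_failures, window):
    """Alert when more than max_failures of the last `window`
    trial results (True = success, False = failure) were failures.
    Unlike a zero-successes-from-Z-trials rule, this catches
    partial outages where a few trials still succeed."""
    recent = results[-window:]
    failures = sum(1 for ok in recent if not ok)
    return failures > max_failures
```

A partial outage, say three failures in the last four trials with one success mixed in, now triggers an alert, whereas a zero-successes rule would stay silent.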
Further actions - site speed:
We're unhappy with the site's speed. We plan on spending some time next week doing what we can to improve it.
(If you upvote this post, please downvote my "Karma sink" comment below - I would prefer not to earn karma from an event like this.)
Collaborative project: New rationality materials page
As you are probably aware, we have a new front page featuring a graphic of a brain. One of the links on the front page, "A source of edited rationality materials," links to the Less Wrong meetup group resources page. A number of users have suggested that this isn't the best page to show off to new readers, and lukeprog has requisitioned a new page to replace it.
I've volunteered to create this new page, but I'd like it to be a collaborative community project.
I'd like this new page to contain a few introductory paragraphs about Less Wrong followed by an index of some of our best content. At the moment, though, this project is still in the brainstorming phase, so this is just a tentative plan. I'd like to hear your thoughts about what the page should contain, including its content, layout, and organization. You can help out by editing the wiki page or by leaving suggestions and feedback in the comments below. Your input is always welcome, even if it's just "This is a terrible idea" or "I think this post should be in there."
Is community-collaborative article production possible?
When I showed up at the Singularity Institute, I was surprised to find that 30-60 papers' worth of material was lying around in blog posts, mailing list discussions, and people's heads — but it had never been written up in clear, well-referenced academic articles.
Why is this so? Writing such articles has many clear benefits:
- Clearly stated and well-defended arguments can persuade smart people to take AI risk seriously, creating additional supporters and collaborators for the Singularity Institute.
- Such articles can also improve the credibility of the organization as a whole, which is especially important for attracting funds from top-level social entrepreneurs and institutions like the Gates Foundation and Givewell.
- Laying out the arguments clearly and analyzing each premise can lead to new strategic insights that will help us understand how to purchase x-risk reduction most efficiently.
- Clear explanations can provide a platform on which researchers can build to produce new strategic and technical research results.
- Communicating clearly is what lets other people find errors in your reasoning.
- Communities can use articles to cut down on communication costs. When something is written up clearly, 1000 people can read a single article instead of needing to transmit the information by having several hundred personal conversations between 2-5 people.
Of course, there are costs to writing articles, too. The single biggest cost is staff time / opportunity cost. An article like "Intelligence Explosion: Evidence and Import" can require anywhere from 150-800 person-hours. That is 150-800 paid hours during which our staff is not doing other critically important things that collectively have a bigger positive impact than a single academic article is likely to have.
So Louie Helm and Nick Beckstead and I sat down and asked, "Is there a way we can buy these articles without such an egregious cost?"
We think there might be. Basically, we suspect that most of the work involved in writing these articles can be outsourced. Here's the process we have in mind:
- An SI staff member chooses a paper idea we need written up, then writes an abstract and some notes on the desired final content.
- SI pays Gwern or another remote researcher to do a literature search-and-summary of relevant material, with pointers to other resources.
- SI posts a contest to LessWrong, inviting submissions of near-conference-level-quality articles that follow the provided abstract and notes on desired final content. Contestants benefit by starting with the results of Gwern's literature summary, and by knowing that they don't need to produce something as good as "Intelligence Explosion: Evidence and Import" to win the prize. First place wins $1200, 2nd place wins $500, and 3rd place wins $200.
- Submissions are due 1 month later. Submissions are reviewed, and the authors of the best submissions are sent comments on what could be improved to maximize the chances of coming in first place.
- Revised articles are due 3 weeks after comments are received. Prizes are awarded.
- SI pays an experienced writer like Yvain or Kaj_Sotala or someone similar to build up and improve the 1st place submission, borrowing the best parts from the other submissions, too.
- An SI staff member does a final pass, adding some content, making it more clearly organized and polished, etc. One of SI's remote editors does another pass to make the sentences more perfect.
- The paper is submitted to a journal or an edited volume, and is marked as being co-authored by (1) the key SI staff member who provided the seed ideas and guided each stage of the revisions and polishing, (2) the author of the winning submission, and (3) Gwern. (With thanks to contributions from the other contest participants whose submissions were borrowed from — unless huge pieces were borrowed, in which case they may be counted as an additional co-author.)
If this method works, each paper may require only 50-150 hours of SI staff time — a dramatic improvement! But this method has additional benefits:
- Members of the community who are capable of doing one piece of the process but not the other pieces get to contribute where they shine. (Many people can write okay-level articles but can't do efficient literature searches or produce polished prose, etc.)
- SI gets to learn more about the talent that exists in its community which hadn't yet been given the opportunity to flower. (We might be able to directly outsource future work to contest participants, and if one person wins three such contests, that's an indicator that we should consider hiring them.)
- Additional paid "jobs" (by way of contest money) are created for LW rationalists who have some domain expertise in singularity-related subjects.
- Many Less Wrongers are students in fields relevant to the subject matter of the papers that will be produced by this process, and this will give them an opportunity to co-author papers that can go on their CV.
- The community in general gets better at collaborating.
This is, after all, more similar to how many papers would be produced by university departments, in which a senior researcher works with a team of students to produce papers.
Feedback? Interest?
(Not exactly the same, but see also the Polymath Project.)
Meta Addiction
I was wondering if anyone has ever had the feeling, like I get sometimes, that they were addicted to 'meta-level' optimizing rather than low-level acting? As in, I'd rather think about how to encourage myself to brush my teeth more than brush my teeth. I'm guessing there's something about this under the akrasia threads?
The motivation to remain at the meta level, thinking about things rather than acting on them, seems to be that it takes less effort to think about doing things than to do them, and there is potentially more long-term benefit in making an overall improvement than in engaging in a specific action. The drawback is that if you spend all your time thinking about meta, you won't get anything done.
How does real world expected utility maximization work?
I would like to ask for help on how to use expected utility maximization, in practice, to maximally achieve my goals.
As a real world example I would like to use the post 'Epistle to the New York Less Wrongians' by Eliezer Yudkowsky and his visit to New York.
How did Eliezer Yudkowsky compute that it would maximize his expected utility to visit New York?
It seems that the first thing he would have to do is figure out what he really wants, his preferences1, right? The next step would be to formalize those preferences by describing them as a utility function and assigning a certain number of utils2 to each outcome in the set, e.g. his own survival. This description would have to be precise enough to determine what it would mean to maximize his utility function.
Now before he can continue he will first have to compute the expected utility of computing the expected utility of computing the expected utility of computing the expected utility3 ... and also compare it with alternative heuristics4.
He then has to figure out each and every possible action he might take, and study all of their logical implications, to learn about all possible world states he might achieve by those decisions, calculate the utility of each world state and the average utility of each action leading up to those various possible world states5.
To do so he has to figure out the probability of each world state. This further requires him to come up with a prior probability for each case and to study all available data: for example, how likely he is to die in a plane crash, how long it would take for him to be cryonically suspended from where he is in case of a fatality, the local crime rate, and whether aliens might abduct him (he might discount the last example, but then he would first have to figure out the right threshold below which probabilities are considered too small to be relevant for judgment and decision making).
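To make the procedure described above concrete, here is a toy sketch of the core calculation, EU(a) = Σ P(s|a)·U(s), picking the action with the highest expected utility. All of the actions, probabilities, and utility numbers are invented for illustration; the question's footnotes point out that eliciting real preferences and probabilities is exactly the hard part.

```python
# Toy expected-utility maximization over a hand-picked set of actions.
# Every number here is made up for illustration.

# For each action: a list of (probability, utility) pairs over outcomes.
# Probabilities for each action sum to 1.
actions = {
    "visit_new_york": [
        (0.90, 10.0),       # trip goes well
        (0.0999, -5.0),     # trip goes badly
        (0.0001, -1000.0),  # fatal accident
    ],
    "stay_home": [
        (1.0, 2.0),         # mildly productive week at home
    ],
}

def expected_utility(outcomes):
    """EU(a) = sum over outcomes s of P(s | a) * U(s)."""
    return sum(p * u for p, u in outcomes)

# Choose the action whose outcome distribution has the highest EU.
best = max(actions, key=lambda a: expected_utility(actions[a]))

for name, outcomes in actions.items():
    print(name, expected_utility(outcomes))
print("best action:", best)
```

Note that this sketch sidesteps every difficulty the question raises: the action set is tiny and given in advance, and the probabilities and utilities are pulled out of thin air rather than grounded.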
I have probably missed some technical details and gotten others wrong, but this shouldn't detract too much from my general request. Could you please explain how Less Wrong style rationality is to be applied practically? I would also be happy if you could point out some worked examples or suggest relevant literature. Thank you.
I also want to note that I am not the only one who doesn't know how to actually apply what is being discussed on Less Wrong in practice. From the comments:
You can’t believe in the implied invisible and remain even remotely sane. [...] (it) doesn’t just break down in some esoteric scenarios, but is utterly unworkable in the most basic situation. You can’t calculate shit, to put it bluntly.
None of these ideas are even remotely usable. The best you can do is to rely on fundamentally different methods and pretend they are really “approximations”. It’s complete handwaving.
Using high-level, explicit, reflective cognition is mostly useless, beyond the skill level of a decent programmer, physicist, or heck, someone who reads Cracked.
I can't help but agree.
P.S. If you really want to know how I feel about Less Wrong then read the post 'Ontological Therapy' by user:muflax.
1. What are "preferences" and how do you figure out what long-term goals are stable enough under real world influence to allow you to make time-consistent decisions?
2. How is utility grounded and how can it be consistently assigned to reflect your true preferences without having to rely on your intuition, i.e. pull a number out of thin air? Also, will the definition of utility keep changing as we make more observations? And how do you account for that possibility?
3. Where and how do you draw the line?
4. How do you account for model uncertainty?
5. Any finite list of actions maximizes infinitely many different quantities. So, how does utility become well-defined?
SI wants to hire a remote LaTeX guru
The Singularity Institute needs to hire 1-2 people who are fluent in LaTeX to help us transform past and future SI publications from looking like this to looking like this.
As with the remote researcher positions, pay is hourly and starts at $14/hr but that will rise if the product is good. You must be available to work at least 20 hrs/week to be considered.
Perks:
- Work from home, with flexible hours.
- Age and credentials are irrelevant; only the product matters.
If you're interested, contact luke@intelligence.org and describe past LaTeX work you've done, with attached PDF examples.