Walt Whitmanisms
The original:
Do I contradict myself? Very well then I contradict myself. I am large, I contain multitudes.
Do I make tradeoffs? Very well then I make tradeoffs. I am poor, I need to make compromises.
Do I repeat myself? Very well then, I repeat myself. You are large, you contain multitudes.
Me:
Am I signaling? Very well then, I am signaling. I am human; I am part of a tribe.
Do I contradict myself? Very well, then I contradict myself. I am large, I can beat up anyone who calls me on it. #whitmanthebarbarian
Do I have no opinion? Very well, then I have no opinion. I am small, I do not contain a team of pundits.
If you had to pick exactly 20 articles from LessWrong to provide the greatest added value for a reader, which 20 articles would you select?
In other words, I am asking you to pick "Sequences: Micro Edition" for new readers, or old readers who feel intimidated by the size and structure of Sequences. No sequences and subsequences, just 20 selected articles that should be read in the given order.
It is important to consider that some information is distributed in many articles, and some articles use information explained in previous articles. Your selection should make sense for people who have read nothing else on LW, and cannot click on hyperlinks for explanation (as if they are reading the articles on a paper, without comments). Do the introductory articles provide enough value even if you won't put the whole sequence to the selected 20? Is it better to pick examples from more topics, or focus on one?
Yes, I am hoping that reading those 20 articles would encourage the reader to read more, perhaps even the whole Sequences. But the 20 articles should provide enough value when taken alone; they should be a "food", not just an "appetizer".
It is OK to pick also those LW articles that are not part of the traditional Sequences. It is OK to suggest less than 20 articles. (Suggesting more than 20 is not OK, because the goal is to select a small number of articles that provide value without reading anything more.)
I don't know if the intention here is to debate other people's choices, but: my wife started The Simple Truth because it was the first sequence post on the list and quickly became frustrated and annoyed that it didn't seem to lead anywhere and seemed to be composed of "in jokes." She didn't try to read further into the Sequences because of the bad impression she got off this article, which is an unusually weird, long, rambling, quirky article.
I actually like The Simple Truth but I don't feel that it makes a good introduction to the Sequences. But hey, this is just one data point.
Genes are overrated, genetics is underrated
by Razib Khan
... I agree on one thing in particular: an emphasis on concrete and specific genes for traits is a motif in science journalism that can be very frustrating, and often misleading. Nevertheless, that’s not the only story. I believe our current culture greatly underestimates the power of genetics in shaping broader social patterns.
How can these be reconciled? Do not genes and genetics go together? The resolution is a simple one: when you speak of 1,000 genes, you speak of no genes. You can’t list 1,000 genes in prose, even if you know them. But using standard quantitative and behavior genetic means one can apportion variation in the population of a trait to variation in genes. 1,000 genes added together can be of great effect. The newest findings in genomics are reinforcing assertions of non-trivial heritability of many complex traits, though rendering problematic attributing that heritability to a specific set of genes.
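The point about 1,000 small-effect genes can be made concrete with a toy simulation. This is only an illustrative sketch, not real genetics: the number of loci, the per-allele effect size, and the environmental noise scale are all made-up parameters. Each locus contributes almost nothing on its own, yet the sum accounts for most of the trait's variance.

```python
import random
import statistics

random.seed(0)

N_GENES = 1000    # many loci, each with a tiny additive effect
N_PEOPLE = 5000
EFFECT = 0.05     # hypothetical per-allele effect size (arbitrary units)

def simulate_person():
    # Genotype at each locus: 0, 1, or 2 copies of the "plus" allele.
    genetic = sum(EFFECT * random.choice([0, 1, 2]) for _ in range(N_GENES))
    environment = random.gauss(0, 1.0)  # non-genetic variation
    return genetic, genetic + environment

genetic_vals, traits = zip(*(simulate_person() for _ in range(N_PEOPLE)))

# Fraction of trait variance attributable to additive genetic variance.
h2 = statistics.pvariance(genetic_vals) / statistics.pvariance(traits)
print(f"estimated heritability: {h2:.2f}")
```

Under these assumed parameters the estimated heritability comes out well above half, while any single locus explains well under a tenth of a percent of trait variance: you can measure the aggregate genetic contribution without being able to name the genes, which is exactly the distinction Khan is drawing.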
Ovulation Leads Women to Perceive Sexy Cads as Good Dads (HT: Heartiste)
...Why do some women pursue relationships with men who are attractive, dominant, and charming but who do not want to be in relationships—the prototypical sexy cad? Previous research shows that women have an increased desire for such men when they are ovulating, but it is unclear why ovulating women would think it is wise to pursue men who may be unfaithful and could desert them. Using both college-age and community-based samples, in 3 studies we show that ovulating women perceive chari
I once asked Ryan North, via the twitters, if he was a transhumanist. He said he wouldn't accept the label, but T-Rex is obviously a transtyrannosaurist.
Some vague ideas about decision theory math floating in my head right now. Posting them in this raw state because my progress is painfully slow and maybe someone will have the insight that I'm missing.
1) thescoundrel has suggested that spurious counterfactuals can be defined as counterfactuals with long proofs. How far can we push this? Can there be a "complexity-based decision theory"?
2) Can we write a version of this program that would reject at least some spurious proofs?
3) Define problem P1 as "output an action that maximizes utility"...
Wikipedia experiment finished: http://www.gwern.net/In%20Defense%20Of%20Inclusionism#sins-of-omission-experiment-2
Close to zero resistance to random deletions. Most disappointing.
The Essence Of Science Explained In 63 Seconds
A one-minute piece of Feynman lecture candy wrapped in reasonable commentary. Excellent and, most importantly, brief intro-level thinking about science and our physical world. Apologies if it has been linked before, especially since I can't say I would be surprised if it was.
...Here it is, in a nutshell: The logic of science boiled down to one, essential idea. It comes from Richard Feynman, one of the great scientists of the 20th century, who wrote it on the blackboard during a class at Cornell in 1964. YouTub
I know quite a bit about crypto and digital security. If I could find the time to write something, which won't be soon, is there something that would interest LessWrong? (If you just want to read crypto stuff, Matthew Green's blog is good; "how to protect a nascent known-to-be-actually-working GAI from bad guys" will read like "stay the fsck away from any mobile phones and the internet and don't trust your hardware; bring an army", which won't be terribly interesting.)
First time this has happened since the 30day karma score was implemented. Lesswrong addictions are apparently easy to squelch!
I like the Operations Research subreddit. Other people looking for applied rationality might like it, too. This probabilistic analysis of problems with federal vanpools is a characteristic example.
Using large scale genetic sequencing has for the first time found the cause of a new illness. Short summary here and full article here. In this situation, an individual had a unique set of symptoms, and by doing a full exome scan for him and his parents they were able to successfully locate the gene that was creating the problem and understand what was going wrong.
Setting up policies to discuss politics without being mind-killed -- I'm linking to this in the early phase because LWers might be interested in following the voluminous discussions on that site to see whether this is possible; it will be easier to start from the beginning, and also possible to make predictions.
I haven't heard this problem mentioned on here yet: http://www.philosophyetc.net/2011/04/puzzle-of-self-torturer.html
What do you think of the puzzle? Do you think the analysis here is correct?
Luke's comment on just how arse-disabled SIAI was until quite recently (i.e., not to any unusual degree) inspired me to read Nonprofit Kit For Dummies, which inspired me to write a blog post telling everyone to buy it. Contains much of my bloviating on the subject of charities from LessWrong over the past couple of years. Includes extensive quote from Luke's comment.
Does anyone know of any good online resources for Bayesian statistics? I'm looking for something fairly basic, but beyond the "here's what Bayes' theorem is" level that Khan Academy offers.
I would be interested in setting up an online study group, preferably via google hangout or skype for several key sequences that I want to grok and question more fully. Anyone else interested in this?
I'm hoping to do some reading on music cognition. I've got a pretty busy few months ahead, so I can't say how far I'll get, and I'm not used to reading scientific literature, so it'll be slow going at first I'm sure, but I'd like to get a better grasp of this field.
In the vein of lukeprog's posts on scholarship, does anyone here know anything on this field, or where I might begin to learn about it? I've got access to a library with a few books dealing with the psychology of music and I can get online access to a small few journals. I've also read most of L...
Suppose that, after some hard work, EY or someone else proves that a provably-friendly AGI is impossible (in principle, or due to it being many orders of magnitude harder than what can reasonably be achieved, or because a spurious UFAI is created along the way with near certainty, or for some other reason).
What would be a reasonable backup plan?
This play in NYC looks pretty sweet. It looks like it touches on concepts like Godshatter, ideas from Three Worlds Collide, and a healthy understanding of the idea that technology could make us very, very different from who we are now.
...While exploring many of the common ideas that come attendant with our fascination with A.I., from Borglike interfaced brains to 2001-esque god complexes, DEINDE is particularly focused on two aspects: how to return to being "normal" after experiencing superhuman intelligence, and how, or if we should, return fr
To give potentially interested parties a greater chance of learning about Light Table, I'm reposting about it here:
"I know there are many programmers on LW, and thought they might appreciate word of the following Kickstarter project. I don't code myself, but from my understanding it's like Scrivener for programmers:
http://www.kickstarter.com/projects/ibdknox/light-table?ref=discover_pop"
I seem to remember someone posting on Less Wrong about software that locks your computer to only doing certain tasks for a given period (to fight web-surfing will-power failures, I guess). After some cursory digging on the site, I couldn't find it. Does anybody remember the thread where this kind of self-binding software was discussed, or at least the name of some brand of this software?
(Ideally I would like to read the thread first, and get a sense of how well this works.)
How old are you?
I'm 41. I'm curious what the age distribution is in the LW community, having been to one RL meetup and finding I was the oldest one there. (I suspect I was about 1.8 times the median age.)
I love what the LW community stands for, and age isn't a big deal... youthful passion is great (trying to hold onto mine!) and I suspect there isn't a particularly strong correlation between age and rationality, but life experience can be valuable in these discussions. In particular, having done more dumb things and believed more irrational things, and gotten over them.
Iodine post up: http://www.gwern.net/Nootropics#iodine
I've been working on this off and on for months. I think it's one of my better entries on that page, and I imagine some of the citations there will greatly interest LWers - eg. not just the general IQ impacts, but that iodization causes voters to vote more liberally.
I also include subsections for a meta-analysis to estimate effect size, a power analysis using said effect size as guidance to designing any iodine experiment, and a section on value of information, tying all the information together.
My gene...
Ever since getting an apartment of my own I've found that, well, I spend more time alone than I used to. Rather than try to take every possible action to ensure that I'm alone as little as possible (which is desperate some of the time and silly a lot of the time) I want to try to learn to like being alone.
So what are some reasons to enjoy spending time alone as opposed to spending it with other people? Or other suggestions about how to self-modify in this way?
I'm looking for a book recommendation on anthropology. I have almost no prior knowledge of the field. I'm after something roughly equivalent to what The Moral Animal was for evolutionary psychology: from-the-ground-up stuff that works by itself and doesn't assume significant background knowledge or further reading for a payoff. An easily accessible pop-writing approach à la The Moral Animal is a must-have; I can't read academic textbooks.
I'm reading Ursula Vernon's Digger (nominated for the Graphic Novel Hugo), and it's very much in the spirit of extrapolating logically from odd premises. Digger (a wombat) is sensible and pragmatic and known to complain about how irresponsible Dwarves are for using magic to shore up their mines.
My major (field of study) in college/university is most likely going to be philosophy. I'm an avid reader of this blog, and as such have internalized many LW concepts and terminology, particularly relating to philosophy. In short, should I cite this site if I make use of a LW concept - learnt several years ago on here - in a paper for a philosophy class? If yes (and I'm leaning towards yes), how?
In general, if one internalizes a blog-specific idea off of the Internet and then, perhaps unintentionally, includes it in a somewhat unrelated undergraduate pape...
An excellent debate between SIAI donor Peter Thiel and George Gilder on:
"The Prospects for Technology and Economic Growth"
I suggest skipping the first 8 minutes since they are mostly intro fluff. Thiel makes a convincing case that we are living in a time of technological slowdown. His argument has been discussed on LessWrong before.
There is an obvious-in-retrospect symmetry between overconfidence and underconfidence in one's predictions. Suppose you have made a class of similar predictions of the form A and have on average assigned 0.8 confidence to them, while 60% actually came true. You might say that you are suffering from overconfidence in your predictions. But when you predict A with confidence p, you also predict ~A with confidence (1-p): you have on average assigned 0.2 confidence to your ~A-type predictions, while 40% actually came true. So if you are overconfident...
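The symmetry is easy to check numerically. A minimal sketch, using the hypothetical numbers above (100 predictions at 0.8 confidence, 60 of which came true): every prediction of A at confidence p is implicitly a prediction of ~A at confidence 1-p, so the same data that shows overconfidence on one side shows equal underconfidence on the other.

```python
# Each prediction: (stated confidence in A, whether A actually happened).
predictions = [(0.8, True)] * 60 + [(0.8, False)] * 40

def calibration_gap(preds):
    """Average stated confidence minus observed hit rate (positive = overconfident)."""
    avg_conf = sum(p for p, _ in preds) / len(preds)
    hit_rate = sum(1 for _, happened in preds if happened) / len(preds)
    return avg_conf - hit_rate

gap_A = calibration_gap(predictions)

# The same predictions, restated as predictions about ~A at confidence 1-p.
complements = [(1 - p, not happened) for p, happened in predictions]
gap_notA = calibration_gap(complements)

print(f"gap on A-type predictions:  {gap_A:+.2f}")   # about +0.20 (overconfident)
print(f"gap on ~A-type predictions: {gap_notA:+.2f}") # about -0.20 (underconfident)
```

The two gaps are equal and opposite by construction, which is the punchline: "overconfidence" is not a property of the data alone but of which side of each prediction you chose to state.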
Question about anti-akrasia measures and precommitments to yourself.
Suppose you need to do action X to achieve the most utility, but it's somewhat unpleasant. To incentivize yourself, you precommit to give yourself reward Y if and only if you do action X. You then complete action X. But now reward Y has become somewhat inconvenient to obtain.
Should you make the effort to obtain reward Y, in order to make sure your precommitments are still credible?
It's not very related to LW or rationality (although in technical terms it touches on Pascal's Mugging), but I want to post this underrated "creepypasta" anyway; it's one of my favourites and I remembered it after flipping through that hippie blog that Will linked me to:
...On his way home that night, as he walked through town, a man stepped out of an alley in front of him. He tensed to defend himself, but the man just stood there. Looking him over, he realized the man looked like a hippie. Something of a comedy caricature of a hippie, really. Long
If it's worth saying, but not worth its own post, even in Discussion, it goes here.