I suggest having a link to the last open thread in each open thread, and similar for the quotes thread. That way, you can just follow the link to find out what people posted near the end of the last thread, so it doesn't become pointless if there's going to be a new thread soon.
Besides LW, what are some other online communities with very high signal/noise ratios?
Some good indicators: Lots of original content, meaningful and well-presented information, respectful conversations with a high level of discourse, low levels of trolling/strong community norms, very strong domain-specific knowledge, people know each other by username.
Another heuristic: somewhere where you would not want to share the link with a lot of people lest the quality be diluted with newcomers. (Hopefully you consider LW a strong enough pool to draw from).
Examples I can think of:
The Straight Dope Message Boards is a great general topic forum with low rates of trolling.
The Oil Drum is a strong web community specific to the energy industry. It stopped updating this past September (thanks knb, I hadn't visited for a couple of months).
Niche subreddits are often a great resource, so much so that when I'm looking for information I often do a reddit search before looking over the greater internet.
Things that wouldn't count:
Social news sites, like Hacker News or the large subreddits, which tend to have a lot of noise.
Gwern's Google+ feed has perhaps the single highest signal/noise ratio.
Let's say I wanted to monitor the variation in my Big 5 personality traits over time. Is there an existing way to do this, or should I handroll my own procedure, which will probably be total BS?
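If you do hand-roll it, the logging side at least doesn't have to be BS: re-take the same public inventory at fixed intervals and keep dated scores. A minimal sketch (the file name and 0-100 scoring are my assumptions, not part of any standard):

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("big5_log.csv")  # hypothetical log file
TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

def record(scores, when=None):
    """Append one dated set of trait scores (0-100) to the log."""
    when = when or date.today().isoformat()
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        w = csv.writer(f)
        if is_new:
            w.writerow(["date"] + TRAITS)  # write header once
        w.writerow([when] + [scores[t] for t in TRAITS])

def trend(trait):
    """Return (date, score) pairs for one trait, oldest first."""
    with LOG.open() as f:
        rows = list(csv.DictReader(f))
    return [(r["date"], float(r[trait])) for r in rows]
```

The hard part isn't the logging but the measurement: retest effects and item memory will add noise no script can remove, so spacing the retests widely probably matters more than the tooling.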
What's scientifically known about hangovers
Debunks the common notion that hangovers are about dehydration. It caught my eye because I believed the dehydration theory, even though I should have known that extreme sensitivity to sound isn't a normal symptom of dehydration. (I've never had a hangover, but popular accounts include sensitivity to sound and light.)
I'm wondering how I can become skeptical enough.
One reason for the dehydration myth may be that "drinking plenty of water" is still one of the most effective things to do: if the problem is the liver breaking down alcohol into toxic acetaldehyde, drinking lots of water helps flush it out.
It's an understandable mistake to go from "more water fixes the problem" to "the problem must've been not enough water (dehydration)."
This was how it was discussed in my university chemistry class. Also mentioned: a similar breakdown (same enzymes or whatnot) happens with methanol, and the breakdown products (formaldehyde and then methanoic acid) are stronger / more toxic than those of ethanol (acetaldehyde / acetic acid).
Which raises the question, if the things people say about "hangovers" are true about the things they apply the term "hangover" to, what's left to be debunked?
I have been interested in the phenomenon called tulpa. (interestingly, Wikipedia sheds next to no light on this issue).
According to one site, it is an "autosuggested and stable visualization, capable of independent thought and action, while possessing its own unique consciousness". Supposedly, following the guides found on the internet, one can create a stable, persistent "imaginary friend", with the looks and character one wants that will be real in all aspects for its creator. Some say that tulpa can provide an alternate viewpoint or help fetch information from their host's memory, but various hosts disagree on the possibility of this.
Looks like tulpa in modern, Western definition has no connection to its Buddhist namesake (like karma on the forums). Some enthusiasts claim otherwise, but, as seems to be characteristic of this topic, there's no evidence.
All I could find are guides and diaries of anonymous people on the Internet. It seems like the whole phenomenon, if it really exists, was invented some 1.5 years ago by some Anonymous: it has its own slang, and there are absolutely no sources connecting the methods to any actual scientific research.
I suspect that the...
There should really be full discussion post about this, since it keeps getting brought up.
EDIT: So I made one. If there isn't interest, well, at least it's spurred me to consolidate a bibliography
In addition to what I wrote in the other thread:
Luhrmann wrote a book, When God Talks Back, about her experiences with evangelicals, which might be useful. She also succeeded in inducing tulpa-like visions of Leland Stanford, jr. in experimental subjects.
The tulpa community also seems to have a fondness for amateur psychological research, although I imagine there'll be a lot of chaff and unfinished projects in there.
Someone who works at Google told me the company is working on trolley problems, because self-driving cars may have to make that sort of decision, and Google will be responsible.
Dumb reinforcement question: How do I reward the successful partial-completion of an open ended task without reinforcing myself for quitting?
Basically I'm picking up the practice of using chocolates as reinforcement. I reward myself when I start and when I finish. This normally works very well. Start doing dishes -> chocolate -> do dishes -> finish doing dishes -> chocolate. It seems viable for anything with discrete end states.
Problem - I've got a couple of long-term tasks (fiction writing and a computer program I'm making) that don't have markers, and I can put anywhere from 30 minutes to 3 days into them without necessarily seeing a stopping point. I'm worried that rewarding chocolates whenever I get up from working will (in the long run) reinforce me to quit more frequently. I don't want to end up with a hummingbird work ethic for these tasks.
How should I reinforce to maximize my time-on-task?
(So far my best plan is to write a smartphone app that creates a hidden random timer between 5 and 55 minutes (bell curve); when it goes off, I reward myself with chocolate if I'm on task. But there are logistical hurdles, and it seems like quite a bit of work for something that might be solved more easily otherwise. Plus, I don't know what bad behavior it might incentivize.)
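The sampling logic itself is only a few lines, for what it's worth. A sketch, assuming the "bell curve" means a normal distribution centered on the midpoint and clipped to the 5-55 minute range (the standard deviation below is an arbitrary choice of mine):

```python
import random

def next_alarm_minutes(lo=5.0, hi=55.0):
    """Sample a hidden delay: roughly bell-shaped, clipped to [lo, hi].

    Mean at the midpoint (30 min by default); sigma is chosen so that
    almost all of the distribution's mass falls inside [lo, hi],
    making the clipping rare.
    """
    mu = (lo + hi) / 2
    sigma = (hi - lo) / 6  # ~99.7% of a normal lies within 3 sigma
    return min(hi, max(lo, random.gauss(mu, sigma)))
```

The hard part of the app would be the scheduling and notification plumbing, not the timer; the sketch is just the part that makes the reward moment unpredictable.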
Why does it need to be a hidden random timer? Reward yourself if you stayed on task for the past 30 minutes. (Hmm, I think we've just reinvented the Pomodoro Technique.)
Incidentally, have you (or others who use schemes like this) considered using intermittent reinforcement? Like, instead of just rewarding yourself upon meeting the victory condition, you flip a coin to see if you get the reward. It seems the obvious thing to do if you're going for the whole inner pigeon thing.
Question about a low-level social thing:
I've noticed that I have low priority in mid-to-large group conversations. What I mean is that in situations where I'm one of two people talking, I'm (generally) the one who stops, and the attention of the "audience" (people-who-aren't-speaking) is predominantly on the other person even before I stop speaking.
This used to cause me considerable distress, but no longer. I've accepted it as a fact of the social universe. But I'm still curious and would like to change it, if possible.
I suspect that this is something that varies by social group, and more strongly suspect that some behavior of mine is key.
I'm interested in (being pointed to) discussion of this type of thing, especially if it contains actionable advice.
Cosmologist and science popularizer Sean Carroll debates Christian apologist William Lane Craig on Feb 28. The topic is God and Cosmology. My prediction: while Sean Carroll is very good, I don't expect him to beat a professional (and a very successful) debater.
I need some advice on spaced repetition software.
I teach high school English to underclassmen who skew towards "totally unmotivated". I have been using spaced repetition principles for years (using games, puzzles, and other spaced reviews) to help with vocabulary and terminology. These do effectively engage many of the poorly motivated.
But recently, smartphones have become ubiquitous enough among students that I'm looking for software I could use as a quasi-official SRS companion app with my students. I think many of them would use it, but only if they experience very minimal frustration setting it up and running it. My wishlist:
(1) Free app on both Android and iPhone (it's about 50/50 with my students).
(2) Companion web app with cloud sync to the mobile apps.
(3) Very easy to use and update with new cards regularly. I would like to be able to post weekly deck additions on my teacher web page that students can add to their decks.
Anki, which I use for my personal learning, seems to come closest -- but the $25 cost of the iPhone app is a problem, and I worry that using the web app on the iPhone would be too much of a hassle. I also worry that the "add external cards to your deck" procedure is a bit too hairy.
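For the weekly-additions part of the wishlist, one workable path with Anki is that it can import plain tab-separated text files (front, tab, back, one card per line), so the weekly post on the teacher page could just be a generated .txt file students import. A sketch, with a hypothetical file name and vocabulary:

```python
def make_anki_import(cards, path="week_05_vocab.txt"):
    """Write (front, back) pairs as a tab-separated file Anki can import.

    Tabs or newlines inside a field would break the one-card-per-line
    format, so collapse all internal whitespace to single spaces.
    """
    clean = lambda s: " ".join(s.split())
    with open(path, "w", encoding="utf-8") as f:
        for front, back in cards:
            f.write(f"{clean(front)}\t{clean(back)}\n")
    return path

# hypothetical weekly vocabulary list
make_anki_import([
    ("ubiquitous", "present or found everywhere"),
    ("laconic", "using very few words"),
])
```

The students would still need to click through Anki's import dialog, so this doesn't fully remove the "hairy procedure" worry; it just makes the weekly file trivial to produce.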
Has anyone seen anything that comes closer to my needs than Anki? Thanks!
Look into memrise.
It has an app, it has a lot of the bells and whistles that Anki lacks (like a scoring/gamification system) that could be helpful with the population you are teaching, and it is all around a solid SRS system. The only thing I think it lacks is the Easy/Good/Hard buttons that Anki has to differentiate how well you know the answer, but that's something I can live without. I use both it and Anki on a day-to-day basis.
I've started making heavy use of archive.is. You give them a link, or click their super-handy bookmarklet, and that page will be archived. I use it whenever I'm going to be saving a link, now, to ensure that there will be a copy if I go looking for it years later (archive.org is often missing things, as I'm sure we've all run in to).
Aging doesn't necessarily resemble the human pattern
Today in Nature, evolutionary biologist Owen Jones and his colleagues have published a first-of-its-kind comparison of the aging patterns of humans and 45 other species. For folks (myself included) who tend to have a people-centric view of biology, the paper is a crazy, fun ride. Sure, some species are like us, with fertility waning and mortality skyrocketing over time. But lots of species show different patterns — bizarrely different. Some organisms are the opposite of humans, becoming more likely to reproduce and less likely to die with each passing year. Others show a spike in both fertility and mortality in old age. Still others show no change in fertility or mortality over their entire lifespan.
This write-up: http://michael.richter.name/blogs/why-i-no-longer-contribute-to-stackoverflow/
And also one of the main issues he discusses:
http://en.wikipedia.org/wiki/Iron_law_of_oligarchy
Both seem relevant to LessWrong as well, to some degree. How can we avoid the problem of 'Creeping Authoritarianism'?
This very short xmas story was mentioned by Eliezer on Facebook: https://www.fanfiction.net/s/9915682/1/The-Last-Christmas. I wonder what the ending means (speculations and spoilers welcome, rot13, if you like).
Sorry if stupid question. Let's assume that the universe (mathematical multiverse?) gives us observations sampled from some simplicity-based distribution, like the universal distribution in UDASSA. Can that explain the initial low entropy of our universe (fewer bits to specify), and also the fact that we're not in a tiny ordered bubble surrounded by chaos?
ETA: I see Rolf Nelson made the same point in 2007. This just makes me more puzzled why Eliezer insists on using causality, given that the causal arrow of time comes from the initial low entropy of the universe.
Can anyone suggest visual symbols for Reason? I need one for a project I'm working on.
Specifically, I'm looking for something that could represent the concept of Reason, but isn't associated with any modern politics, and doesn't rely on an understanding of modern science. E.g., the first thing that came to mind was a sketch of an atom, but that won't suit. It should be recognizable to a pre-industrial scientist or philosopher.
On googling, the best fit I've found so far was a lit candle; Reason as a light in the darkness. That should give an idea of where my head's at.
I donated a small amount of money to CFAR and Archive.org when there were matching campaigns, because both organisations accept bitcoin as a payment option. I liked the feeling, but as a student it is quite an expensive experience, though I will do it again in the future. One thing I'd like, though, is to be notified when my favorite charities have matching campaigns going on, so that my contribution goes further. Is there a way to get emails on those occasions without being spammed every other day about stuff I really don't care about?
A new paper gives a much better algorithm for approximating max flow in undirected graphs. Paper is here. Article for general readers is here. Although the new algorithm is asymptotically better, it remains to be seen if it is substantially better in the practical range. However, this is an example of discovering a substantially more efficient algorithm where one might not have guessed that substantial improvements were possible.
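For context on what the new result improves upon, the textbook exact algorithm (Edmonds-Karp: repeatedly augment along shortest BFS paths) runs in O(VE^2) time; the new work gets near-linear-time approximations for the undirected case. A minimal sketch of the classic baseline:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: augment along shortest (BFS) paths until none remain.

    cap: dict-of-dicts of edge capacities, e.g. cap["s"]["a"] = 10.
    Mutates cap in place (it becomes the residual graph).
    """
    # ensure every edge has a reverse entry for residual updates
    for u in list(cap):
        for v in list(cap[u]):
            cap.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path with spare capacity
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow  # no augmenting path left
        # find the bottleneck along the path, then update residuals
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
        flow += bottleneck
```

This is of course nothing like the paper's algorithm (which uses electrical-flow machinery); it's just the baseline whose asymptotics the new approximation scheme beats.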
Is anyone out there irritated with the concept of anti-fragility?
It seems to me to unnecessarily merge three different concepts: evolution by selection, learning, and escaping a local optimum by means of random perturbations.
The claim that there are systems that benefit from disorder, exactly as stated, has to my mind only one instance, and not even a real one: Project Orion.
I'd like to be able to watch posts. Currently, I can see if someone replies to a post of mine, but not any other post. Sometimes there are posts where someone asks an interesting question, but nobody has answered it, and that person isn't me. There's no way for me to tell when someone replies to it so I can read the reply.
Every post and every comment has an envelope icon in the lower right, just to the right of the permalink chain icon, just to the right of the speech bubble reply icon. Click on the envelope and it will be highlighted; click again to go back to gray. While highlighted, replies will show up in your inbox. It is only for immediate replies, not replies to replies.
I have recently seen this article on a drug that is not yet approved by the FDA due to a lack of controlled studies. However, the doctor proposing it considers the drug a lifesaver for babies and the existing evidence sufficient to skip the trial phase and go ahead saving lives.
I am usually fond of evidence-based medicine. However, in this case, I am shaken by the fact that the drug has already been in use in Europe for some time, with no scandals. Additionally, it is basically a mixture of lipids from fish oil, which sounds like normal nutrition to me, not a novel ...
In a recent comment, I expected that my question might have been already answered, so I wrote this:
I'm just now seeing this discussion, and don't have time to read earlier posts.
I knew this was arrogant, so I appreciated the humor of this reply:
Maybe you can hire someone to read them for you and prepare an executive summary :)
I wanted to explain here why I did not read the previous posts.
There are roughly three nested reasons:
first, it was an experiment because I am often tempted to write something like this (in fact, I have in less egregious cas
I know there are some R Scott Bakker fans on here, and I was thinking recently about the Second Darkness series. Rot13d for spoilers:
Vg'f n funzr gur pbafhyg ner rivy. Vs gurl jrera'g fb pbzzvggrq gb rivy npgf, gurl pbhyq ratvarre n jnl gb tvir rirelbar n unccl raqvat.
Jr ner gbyq gung fbepreref ner qnzarq, naq gung gur hygvzngr tbny bs gur Vapubebv vf gb erqhpr gur ahzore bs yvivat fbhyf ba gur cynarg gb srjre guna 144,000 va beqre gb frny gur cynarg sebz gur Bhgfvqr naq rfpncr qnzangvba. Gur Pbafhyg pbhyq erpehvg nf znal fbepreref nf cbffvoyr, genva gurz g...
Is the Iron law of oligarchy essentially a Goodhart's Law applied to humans? Like: You want a group of humans to accomplish something useful, so you create a system to resolve conflicts, e.g. a democratic majority vote. Sooner or later people learn how to win the majority vote by optimizing for winning the majority vote, without accomplishing much of what you originally wanted them to do. -- And if you try to fix this by adding some safety mechanism X to the democratic vote, then people will simply optimize for the majority vote plus X. For example in addition to elected politicians known to optimize for popularity, you add unelected bureaucrats who are supposed to be the experts, but somehow those just entrench themselves in the bureaucratic system regardless of their level of expertise.
If so, then essentially there is no safe way to solve this. If we measure something, then Goodhart's Law attacks. If we don't measure something, then... well, just because you are not looking at something, it doesn't mean it's not there... in the absence of explicit rules, the implicit rules will decide; the most popular people will simply be the most popular people.
All we can do is use some heuristics, and remember the nameless virtue; i.e., change or abandon the heuristics when they stop being reasonable. We must keep thinking and updating, again and again.
Specifically, I have already noticed Goodhart's Law in action on StackExchange. Instead of helping other people, it's more and more about getting more points than other contributors. For example, you start writing your answer before you have even thought it through completely, because posting a partial answer and editing it later beats thinking it through, posting, and finding that someone else posted a very similar answer a minute sooner. So it's a cycle of "use Google, post the first information found, use Google more, edit your answer to include the additional information." And if the question cannot be solved by googling, nominate it for deletion as off-topic or something. If no one can answer the question fully, criticize other people's partial answers as incomplete and downvote them; if you couldn't get points with your strategy, they don't deserve them either.
Improving measurements is one of the boring but massive levers we have at our disposal, e.g. GiveWell, or the technical details of how voting schemes capture preferences, etc.