Recently I have been thinking about imaginary expertise. It seems remarkably easy for human brains to conflate "I know more about this subject than most people" with "I know a lot about this subject". LessWrongers read widely across many areas, and as a result I think we are more vulnerable to making this mistake.
It's easy for a legitimate expert to spot imaginary expertise in action, but are there principles for identifying it, in ourselves and in others, when we aren't experts ourselves? Here are a few candidates for spotting imaginary expertise. I invite you to suggest your own.
Rules and Tips vs Principles
At some point, a complex idea from [topic] was distilled down into a simple piece of advice for neophytes. One of those neophytes took it as gospel, and told all their friends how this advice formed the fundamental basis of [topic]. Examples include "if someone touches their nose, they're lying" and "never end a sentence with a preposition".
If someone offers a rule like this, but can't articulate a principled basis for why it exists, I tend to assume they're an imaginary expert on the subject. If I can't offer a principled basis for any such rule I provide...
I agree with what you wrote. Having said this, let's go meta and see what happens when people use the "rules and tips" you have provided here.
A crackpot may explain their theory without using any scientific terminology, even where a scientist would be forced to use some. I have seen many people "disprove" the theory of relativity without using a single equation.
If there is a common myth in your field that most half-educated people believe, trying to disprove that myth will sound very similar to a crackpot narrative. Or if there was an important change in your field 20 years ago, and most people haven't heard about it yet but many of them have read the older books written by experts, explaining the change will also sound like contradicting all the experts.
On the "all arguments are soldiers" metaphorical battlefield, I often find myself in a repetition of a particular fight. One person whom I like, generally trust, and so have mentally marked as an Ally, directs me to arguments advanced by one of their Allies. Before reading the arguments or even fully recognizing the topic, I find myself seeking any reason, any charitable interpretation of the text, to accept the arguments. And in the contrary case, in a discussion with a person whose judgment I generally do not trust, and whom I have therefore marked as an (ideological) Enemy, it often happens that they direct me to arguments advanced by their own Allies. Again before reading the arguments or even fully recognizing the topic, I find myself seeking any reason, any flaw in the presentation of the argument or its application to my discussion, to reject the arguments. In both cases the behavior stems from matters of trust and an unconscious assignment of people to MySide or the OtherSide.
And weirdly enough, I find that that unconscious assignment can be hacked very easily. Consciously deciding that the author is really an Ally (or an Enemy) seems to override the unconsc...
I figured this would be broadly of interest to this site:
http://www.pnas.org/content/111/45/16106.abstract
"Chlorovirus ATCV-1 is part of the human oropharyngeal virome and is associated with changes in cognitive functions in humans and mice"
A metagenomics study of the throat virome of a bunch of people in Baltimore revealed that a full 40% of them were persistently infected with a clade of chloroviruses, very large ~300 kilobase DNA viruses (possibly very distantly related to poxviruses but it's difficult to tell) which have previously only been known to infect freshwater algae. Upon looking at correlations, they found an extremely significant correlation between infection and a mild degradation in memory and attention tasks. Infecting mice with the virus both caused a decline in memory function as measured by maze-running tasks and, since unlike a human you can crack open a mouse and extract RNA from pieces of its brain, very clear changes in the gene expression of the hippocampus. Not a clue about the mechanism.
This virus had already been noted to be odd a few years ago - a paper from 2011 (http://www.sciencedirect.com/science/article/pii/S1360138511002275) noted th...
Fulltext: https://pdf.yt/d/dr3uP9XOtT1BPimU / https://dl.dropboxusercontent.com/u/5317066/2014-yolken.pdf / http://libgen.org/scimag7/10.1073/pnas.1418895111.pdf
I didn't much like it. This thing reeks of data dredging in every step; I don't see why they controlled for birth place when you'd think that current residence would be much more relevant (Baltimore has rich and poor areas like most big cities; and if nothing else, it'd give you an idea of infection vectors if carriers cluster around the harbor or specific places); I find it odd that their WAIS subtest shows zero (0.0) decrease in the infected group while their weirdo IQ test I've never heard of shows a fall; and I'm not sure how convincing I find their mouse models* - to what extent do they really mimic human infections with no apparent symptoms? It wouldn't surprise me if, every time you gave mice a big injection of infectious organisms, their scores fell simply because you made them sick with something, so I'm not sure whether the mouse experiment is testing the right causal hypothesis (it might be testing 'raging infections decrease cognitive performance', not 'this algal virus, and not other infectious agents, decreases cognitive performance').
I would not be surprised if this never replicates.
* kudos to them for trying to experimentally test it, though
This stupid bot has almost 20 000 comment karma on Reddit.
I have seen it in action, and sometimes it takes humans a while to recognize it is a bot, not a passive-aggressive human. Because, well, there are many kinds of humans on the internet.
But this made me think -- maybe we could use "average reddit karma per comment" or something like it as a Turing test metric. And perhaps we could hold a bot-writing competition, where the participant bots would be released on Reddit, and the winner is the one which collects the most karma in 3 months.
Of course the rules would have to be a bit more complex. Some bots are useful by being obvious bots, e.g. the wikipedia bot that replies with summaries of Wikipedia articles to comments containing links to Wikipedia. Competition in making useful bots would also be nice, but I would like to focus on bots that seem like (stupid) humans. Not sure how to evaluate this.
Maybe the competition could have an additional rule, that the authors of the bots are trying to find other bots on Reddit, and if they find them, they can destroy them by writing a phrase that each bot must obey and self-destruct, such as "BOT, DESTROY YOURSELF!"...
Related: Stealth Mountain, a twitter bot (now defunct) which would correct tweets containing the expression "sneak peak".
Both this and the bot you link to rely less on getting machines to cleverly reproduce human behaviour, and more on identifying robotic human behaviour that can be carried out by stupid machines. Since this is probably a winning strategy, I'd recommend making that the focus of such a competition.
Schizophrenia occurs in national populations with an annual prevalence of 1.4 to 4.6 per 1000 and incidence rates of 16–42 per 100 000 (Jablensky, 2000). Although the incidence rates vary between countries by a factor of less than 3, wider ranges of variation are found among population subgroups within single countries. In the UK, for example, incidence rate ratios of 4 or above have been estimated both for the lowest social class in the indigenous White population and for Black immigrant groups. These data provide the most compelling evidence yet to hand for the role of socio-economic factors in aetiology.
I think the amount of additional schizophrenia is low enough that this isn't a major issue for immigration.
an additional 1% migration rate would increase world GDP by about 1% (i.e. about one trillion dollars)
I am having strong doubts about this number. The paper cited is long on handwaving and seems to be entirely too fond of expressions like "should make economists’ jaws hit their desks" and "there appear to be trillion-dollar bills on the sidewalk". In particular, there is the pervasive assumption that people are fungible so transferring a person from a $5,000 GDP/capita economy to a $50,000 GDP/capita economy immediately nets you $45,000 in additional GDP. I don't think this is true.
I don't think that's a necessary implication for wanting to open up immigration. All that's needed is that new immigrants should do significantly better than they did in their home country, and do some good in the country they've moved to.
I was reading the thread about Neoreaction and remembered this old LW post from five years ago:
...Somewhere in the vastness of the Internet, it is happening even now. It was once a well-kept garden of intelligent discussion, where knowledgeable and interested folk came, attracted by the high quality of speech they saw ongoing. But into this garden comes a fool, and the level of discussion drops a little—or more than a little, if the fool is very prolific in their posting. (It is worse if the fool is just articulate enough that the former inhabitants of t
fewer people seemed to be on the fence than I expected; "the distribution of opinions about neoreaction" seemed bimodal
I suspect this is the polarizing effect of politics, not something specific to LW or to neoreaction. We are talking about labels, not ideas. I may agree with half of a movement's ideas and disagree with the other half, but I usually have a clear opinion about whether I want to identify with its label or not.
I understand that LessWrong consists of real people, but when I think about LessWrong, the mental image that comes to mind is that of a place, an abstract entity, not a community of people.
My mental image of the LW community is more or less "people who have read the Sequences, and in general agree with them". Yes, I am aware that in recent years many people ignore this stuff, to the degree that mentioning the Sequences is a minor faux pas. (And for a while it was a major faux pas, and some people loudly insisted that telling someone to read the Sequences is lesswrongese for "fuck you". Not sure how much of that attitude actually came from the "Rational"Wiki.) That, in my opinion, is a bad thing, ...
If you could have perfect control of your own mind, what would you do with it?
(I realize the question is a bit vague. Please try and answer anyway.)
Speaking of Social Justice Warriors (SJW's) versus a man familiar to many of us:
http://www.bloombergview.com/articles/2014-11-21/economics-is-a-dismal-science-for-women
http://www.overcomingbias.com/2014/11/hanson-loves-moose-caca.html
http://www.unz.com/isteve/noah-smith-tries-to-sic-shirtstorm-mob-on-poor-robin-hanson/
I was writing a Markov text generator yesterday, and happened to have a bunch of corpora made up of Less Wrong comments lying around from a previous toy project. This quickly resulted in the Automated Gwern Comment Generator, and then the Automated sixes_and_sevens Comment Generator.
Anyone who's ever messed around with text generated from simple Markov processes (or taken the time to read the content of some of their spam messages) will be familiar with the hilarious, and sometimes strangely lucid, garbage they come out with. Here is AutoGwern:
...Why is ps
Then the restless spirit of Paul Graham sat on my body in some obscure location
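For anyone who wants to play along at home, a word-level Markov generator of the kind described fits in a few lines. This is a minimal sketch, not the original project's code; `order` (the number of preceding words each prediction conditions on) and all other names are my own:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each tuple of `order` consecutive words to the words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20):
    """Random-walk the chain, emitting up to `length` words."""
    key = random.choice(list(chain))
    out = list(key)
    while len(out) < length:
        followers = chain.get(key)
        if not followers:  # dead end: this key only appeared at the corpus's end
            break
        out.append(random.choice(followers))
        key = tuple(out[-len(key):])
    return " ".join(out)
```

Fed a corpus of one person's comments, a chain of order 1 or 2 produces exactly the kind of locally plausible, globally incoherent output quoted above: each transition is something the author really wrote, but nothing constrains the sentence as a whole.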
Well that's a fanfiction I haven't read before.
Couple of random thoughts about cryonics:
It would actually be better to have cryonics legally recognized as a burial ritual than as a cadaver experimentation. In that way it can be performed on someone who hasn't formally signed a will, granting their body as an anatomical gift to the cryonic service provider. Sure, ideally it should be considered a medical procedure on a living person in a critical condition, but passing such legislation is next to impossible in the foreseeable future, whereas the former sounds quite feasible.
The stabilization procedu
This year's edition of Stoic Week begins next Monday. It's a seven-day introduction to the basic ideas of Stoicism, combined with an attempt to measure its effects.
Does anyone know of, or have links to, research into whether the near-unlimited access to global media via the Internet, etc. has a net negative effect on people's self-esteem?
A couple of examples...
My brother quit Facebook saying, "I know these people's lives are not nearly as interesting as their feeds make them out to be. Every time I hang out with these people... and most of them sit around trying to decide what to watch on Netflix six nights out of seven. It's annoying."
Excepting my cynical brother, FB seems to dupe a lot of peopl...
Advice/help needed: how do I study math by doing lots of exercises when there's nobody there to clue me in when I get stuck?
It's a stupid problem, but because of it I've been stuck on embarrassingly simple math since forever, when (considering all the textbooks and resources I have and the length of time I've had it as a goal) I should have been years ahead of my peers. Instead, I'm many years behind. (Truth be told, when performance is tested I'm about the same as my peers. But that's because my peers and I have only struggled for a passing grade. That's ...
stupid problem
embarrassingly simple math since forever
I should have been years ahead of my peers
momentary lack of algebraic insight ("I could solve this in an instant if only I could get rid of that radical")
for which I've had the intuitions since before 11th grade when they began teaching it to us
Sorry to jump from object-level to meta-level here, but it seems pretty clear that the problem here is not just about math. Your subjective assessment of how difficult these topics are is inconsistent with how well you report you are doing at them. And you're attaching emotions of shame and panic ("problem has snowballed") to observations that should just be objective descriptions of where you are now. Get these issues figured out first (unless you're in some educational setting with its own deadlines). Math isn't going anywhere; it will still be there when you're in a place where doing it won't cause you distress.
A riddle for Lesswrong: what exactly is the virtue of Bissonomy?
When I read the article, I got the feeling that there were enough clues to extrapolate a solution in the same way that EY extrapolated the Dementors' 'true natures'. That this was a solvable riddle. I've got my suspicions, but I'd like to hear what you guys can come up with.
Does the ten-year-old child provide an actuarial model for superlongevity?
According to the actuarial tables:
http://www.ssa.gov/oact/STATS/table4c6.html
A ten-year-old boy has a probability of surviving that year of 0.999918. After that, his probability of surviving a given year decreases with every additional year.
If you could lock in the ten-year-old's probability of survival per year after the age of 10, mathematically a population of such individuals would have a "half life" of ~8000 years. In other words, if you had a population of 1,000...
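The half-life arithmetic checks out. If the annual survival probability p stayed fixed, the fraction of the population still alive after n years is p raised to the n, and the "half life" is the n for which that equals one half. A quick sketch using the 0.999918 figure from the SSA table above:

```python
import math

# Annual survival probability of a ten year old, from the SSA table above.
p = 0.999918

# If p were locked in forever, the fraction still alive after n years is
# p ** n; the population's "half life" is the n solving p ** n == 0.5.
half_life = math.log(0.5) / math.log(p)
print(round(half_life))  # about 8450 years, consistent with the "~8000" figure
```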
The Less Wrong survey asked what your favourite Less Wrong post is. The Slate Star Codex survey asks what your favourite SSC post is. Naturally, the next question is:
What are your favourite Overcomingbias posts? What posts did you find especially insightful or informative? What posts changed the way you think about something? What posts did you find thought-provoking even if you disagree with the ideas expressed in them (perhaps similarly to how Bryan Caplan thinks about Robin Hanson's ideas)? What good posts do you think should be better known? What posts would y...
No general procedure for bug checks will do.
Now, I won’t just assert that, I’ll prove it to you.
I will prove that although you might work till you drop,
you cannot tell if computation will stop.
A poetic proof of the Halting Problem's undecidability by one Geoffrey Pullum, continued here. Enjoy!
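For anyone who wants the prose version: the poem versifies the classic diagonalization. Assume a decider `halts(f, x)` exists, then build a program that does the opposite of whatever the decider predicts about it. A sketch (the names here are mine, not Pullum's):

```python
def make_contrary(halts):
    """Given a purported halting decider halts(f, x), build the program
    that defeats it: it loops iff the decider says it would halt."""
    def contrary(f):
        if halts(f, f):   # decider claims f(f) halts...
            while True:   # ...so loop forever
                pass
        return None       # decider claims f(f) loops, so halt at once
    return contrary

# Whatever halts(contrary, contrary) answers, contrary(contrary) does the
# opposite -- so no correct `halts` can exist.
```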
Ummm... are things like this OB post really supposed to be considered even remotely about Rationality? It's basically just Robin Hanson using a few anecdotes to weave himself a short narrative designed to justify his misanthropy and make himself feel better about holding political positions others consider immoral.
If my karma takes a hit is there an easy way that I can find out what is being downvoted? I can't self correct if I don't know what is disliked.
I can't self correct if I don't know what is disliked.
Self-correction on the basis of twitches of the hive mind is not necessarily a great idea.
Is there a way to see what comments (not articles) I have downvoted? Or get a summary of how many downvotes and upvotes I've made?
[Cross-posted from So8res's, recent guide to MIRI's research]
Just thought I'd add links to these other "guides":
"Atoms of Neural computation": List of promising research directions for neuro-inspired AI (IOW, tries to answer the question "Deep Learning is just regression, so what could we possibly do next?")
"Physical Principles for Scalable Neural Recording": List of promising research directions for developing tools to do live recording of the brain (a separate issue from connectomics).
Is this a case of multiple discovery?[1] And might something similar happen with AGI? Here are four projects that have concurrently developed very similar-looking models:
(1) University of Toronto: Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models
(2) Baidu/UCLA: Explain Images with Multimodal Recurrent Neural Networks
(3) Google: A Neural Image Caption Generator
(4) Stanford: Deep Visual-Semantic Alignments for Generating Image Descriptions
[1] The concept of multiple discovery is the hypothesis that most scientific discoveries and inventi...
How meaningful is the "independent" criterion given the heavy overlaps in works cited and what I imagine must be a fairly recent academic MRCA among all the researchers involved?
http://selenite.livejournal.com/282731.html
A discussion of constraining an AI by building in detailed contracts and obedience to laws against theft and criminal behavior.
I don't think this is obviously correct (if nothing else, parts of a complex set of rules can interact unpredictably), but these are the tools that humans have developed for dealing with semi-malign natural intelligences, so we should at least take a look at them.
[Meta]
I noticed that this recent lw post showed up on the "recent comments" side-bar, but that it doesn't show up in the list of "discussion" posts. Is this just me? Do other people have this show up in "discussion"? (Also, this is not the first time that I've noticed that there are posts in the side-bar that I can't find in Discussion).
Is there any remotely feasible way for us to contain a superintelligence aside from us also becoming superintelligences?
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.