Less Wrong: Open Thread, September 2010
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Comments (610)
Wow! I just lost 50 points of karma in 15 minutes. I haven't made any top level posts, so it didn't happen there. I wonder where? I guess I already know why.
And now my karma has jumped by more than 300 points! WTF? I'm pretty sure this time that someone went through my comments systematically upvoting. If that was someone's way of saying "thank you" ... well ... you are welcome, I guess. But isn't that a bit much?
That happened to me three days ago or so, after my last top-level post. At the time said post was at -6 or so, and my karma was at 60-something. Then, within a space of less than 10 minutes, my karma dropped to zero (actually I think it went substantially negative). So what is interesting to me is the timing.
I refresh or click on links pretty quickly. It felt like my karma dropped by more than 50 points instantly (as if someone had dropped my karma in one hit), rather than someone or a number of people 'tracking me'.
However, I could be mistaken, and I'm not certain I wasn't away from my computer for 10 minutes or something. Is there some way for high karma people to adjust someone's karma? Seems like it would be useful for troll control.
While katydee's story is possible (and probable, even), it is also possible that someone is catching up on their Less Wrong reading for a substantial recent period and issuing many votes (up and down) in that period. Some people read Less Wrong in bursts, and some of those are willing to lay down many downvotes in a row.
It is possible that someone has gone through your old comments and systematically downvoted them-- I believe pjeby reported that happening to him at one point.
In the interest of full disclosure, I have downvoted you twice in the last half hour and upvoted you once. It's possible that fifty other people think like me, but if so you should have very negative karma on some posts and very positive karma on others, which doesn't appear to be the case.
I think you are right about the systematic downvoting. I've noticed and not minded the downvotes on my recent controversial postings. No hard feelings. In fact, no real hard feelings toward whoever gave me the big hit - they are certainly within their rights and I am certainly currently being a bit of an obnoxious bastard.
The penny has just dropped! When I first encountered LessWrong, the word 'Rationality' did not stand out. I interpreted it to mean its everyday meaning of careful, intelligent, sane, informed thought (in keeping with 'avoiding bias'). But I have become more and more uncomfortable with the word because I see it having a more restricted meaning in the LW context. At first, I thought this was an economic definition of the 'rational' behaviour of the selfish and unemotional ideal economic agent. But now I sense an even more disturbing definition: rational as opposed to empirical. As I use scientific evidence as the most important arbiter of what I believe, I would find the anti-empirical idea of 'rational' a big mistake.
There is at least one post about that - though I don't entirely approve of it.
Occam's razor is not exactly empirical. Evidence is involved - but it does let you choose between two theories both of which are compatible with the evidence without doing further observations. It is not empirical - in that sense.
Occam's razor isn't empirical, but it is the economically rational decision when you need to use one of several alternative theories (that are exactly "compatible with the evidence"). Besides, "further observations" are inevitable if any of your theories are actually going to be used (i.e. to make predictions [that are going to be subsequently 'tested']).
Ummmmmmmm.... no.
The word "rational" is used here on LW in essentially its literal definition (which is not quite the same as its colloquial everyday meaning).... if anything it is perhaps used by some to mean "bayesian"... but bayesianism is all about updating on (empirical) evidence.
How diverse is Less Wrong? I am under the impression that we disproportionately consist of 20-35 year old white males, more disproportionately on some axes than on others.
We obviously over-represent atheists, but there are very good reasons for that. Likewise, we are probably over-educated compared to the populations we are drawn from. I venture that we have a fairly weak age bias, and that can be accounted for by generational dispositions toward internet use.
However, if we are predominately white males, why are we? Should that concern us? There's nothing about being white, or female, or hispanic, or deaf, or gay that prevents one from being a rationalist. I'm willing to bet that after correcting for socioeconomic correlations with ethnicity, we still don't make par. Perhaps naïvely, I feel like we must explain ourselves if this is the case.
I've been thinking that there are parallels between building FAI and Talmud-- it's an effort to manage an extremely dangerous, uncommunicative entity through deduction. (An FAI may be communicative to some extent. An FAI which hasn't been built yet doesn't communicate.)
Being an atheist doesn't eliminate cultural influence. Survey for atheists: which God do you especially not believe in?
I was talking about FAI with Gene Treadwell, who's black. He was quite concerned that the FAI would be sentient, but owned and controlled.
This doesn't mean that either Eliezer or Gene are wrong (or right for that matter), but it suggests to me that culture gives defaults which might be strong attractors. [1]
He recommended recruiting Japanese members, since they're more apt to like and trust robots.
I don't know about explaining ourselves, but we may need more angles on the problem just to be able to do the work.
[1] See also Timothy Leary's S.M.I.2L.E.-- Space Migration, Increased Intelligence, Life Extension. Robert Anton Wilson said that was a match for Catholic hopes of going to heaven, being transfigured, and living forever.
He has a very good point. I was surprised more Japanese or Koreans hadn't made their way to Lesswrong. This was my motivation for first proposing we recruit translators for Japanese and Chinese and to begin working towards a goal of making at least the sequences available in many languages.
Not being a native speaker of English proved a significant barrier for me in some respects. The first noticeable one was spelling; however, I solved that problem by outsourcing that part of the system known as Konkvistador to the browser. ;) Other, more insidious forms of miscommunication and cultural difficulty persist.
I'm not sure that it's a language thing. I think many (most?) college-educated Japanese, Koreans, and Chinese can read and write in English. We also seem to have more Russian LWers than Japanese, Koreans, and Chinese combined.
According to a page gwern linked to in another branch of the thread, among those who got 5 on AP Physics C in 2008, 62.0% were White and 28.3% were Asian. But according to the LW survey, only 3.8% of respondents were Asian.
Maybe there is something about Asian cultures that makes them less overtly interested in rationality, but I don't have any good ideas what it might be.
All LW users display near-native control of English, which won't be as universal, and which typically requires years-long consumption of English content. The English-speaking world is the default source of non-Russian content for Russians, but that might not be the case for native Asians (what's your impression?).
My impression is that for most native Asians, the English-speaking world is also their default source of non-native-language content. I have some relatives in China, and to the extent they do consume non-Chinese content, they consume English content. None of them consume enough of it to obtain near-native control of English though.
I'm curious, what kind of English content did you consume before you came across OB/LW? How typical do you think that level of consumption is in Russia?
Unfortunately, browser spell checkers usually can't help you to spell your own name correctly. ;) That is one advantage to my choice of nym.
I don't know why you presume that, because we are mostly 25-35-something white males, a reasonable proportion of us are not deaf, gay or disabled. (One of the top-level posts is by someone who will soon deal with being limited, perhaps, to communicating with the world via computer.)
I smell a whiff of that weird American memplex for minority and diversity that my third-world mind isn't quite used to, but which I seem to encounter more and more often - you know, the one that, for example, uses the word minority to describe women.
Also, I decline the invitation to defend this community for lack of diversity; I don't see it a priori as a thing in need of a large part of our attention. Rationality is universal - not in the sense of being equally valued in different cultures, but certainly in the sense of being universally effective (rationalists should win). One should certainly strive to keep a site dedicated to refining the art free of unnecessary additional barriers to other people. I think we should eventually translate many articles into Hindi, Japanese, Chinese, Arabic, German, Spanish, Russian and French. However, it's ridiculous to imagine that, after we eliminate all such barriers, our demographics will somehow come to match the socioeconomically adjusted mix of unspecified ethnicities that you seem to be hunting for.

I assure you white Westerners have their very, very insane spots - we deal with them constantly - but God, for starters, isn't among them. Look at the GSS or various sources on Wikipedia and consider how much more of a thought-stopper and boo light atheism is for a large part of the world. What should the existing population of Less Wrong do? Refrain from bashing theism?

This might incur downvotes, but Westerners did come up with the scientific method and did contribute disproportionately to the fields of statistics and mathematics. Is it so unimaginable that the developed world (Iceland, Italy, Switzerland, Finland, America, Japan, Korea, Singapore, Taiwan, etc.) and its majority demographics still have a more rationality-friendly climate overall (due to the caprice of history) than basically any other part of the world? I freely admit my own native culture (though I'm probably thoroughly Westernised by now, due to late-childhood influences of mass media and education) is probably less rational than the Anglo-Saxon one.
However, simply going on a "crusade" to make other cultures more rational first, since they are "clearly" more in need, is - besides sending terribly bad signals and carrying potential for self-delusion - perhaps a bad idea for humanitarian reasons too.
Sex ratio: there are some differences in aptitude, psychology and interests that ensure that compsci and mathematics, at least at the higher levels, will remain disproportionately male for the foreseeable future (until human modification takes off). This obviously affects our potential pool of recruits.
Age: people grow more conservative as they age. Less Wrong is, firstly, available only on a relatively new medium, and secondly has a novel approach to popularizing rationality. Also, as people age, the mind unfortunately does deteriorate. Very few people have an IQ high enough to master difficult fields before they are 15, and even their interests are somewhat affected by their peers.
I am sure I am rationalizing at least a few of these points. However, I need to ask you: is pursuing some popular concept of diversity truly cost-effective at this point? (Why did you, for example, not commend LW on its inclusion of non-neurotypicals, who are often excluded in some segments of society? Also, why do you only bemoan the under-representation of the groups everyone else does? Is this really a rational approach? Why don't we go study where in the memespace we might find truly valuable perspectives, and focus on those? Maybe they overlap with the popular kinds, maybe they don't, but can we really trust popular culture, and especially the standard political discourse, on this?)
If you read my comment, you would have seen that I explicitly assume that we are not under-represented among deaf or gay people.
If less than 4% of us are women, I am quite willing to call that a minority. Would you prefer me to call them an excluded group?
I specifically brought up atheists as a group that we should expect to over-represent. I'm also not hunting for equal-representation among countries, since education obviously ought to make a difference.
That seems like it ought to get many more boos around here than mentioning the western world as the source of the scientific method. I ascribe differences in those to cultural influences; I don't claim that aptitude isn't a factor, but I don't believe it has been or can easily be measured given the large cultural factors we have.
This also doesn't bother me, for reasons similar to yours. As a friend of mine says, "we'll get gay rights by outliving the homophobes".
Which groups should I pay more attention to? This is a serious question, since I haven't thought too much about it. I neglect non-neurotypicals because they are overrepresented in my field, so I tend to expect them amongst similar groups.
I wasn't actually intending to bemoan anything with my initial question, I was just curious. I was also shocked when I found out that this is dramatically less diverse than I thought, and less than any other large group I've felt a sort of membership in, but I don't feel like it needs to be demonized for that. I certainly wasn't trying to do that.
Given new evidence from the ongoing discussion I retract my earlier concession. I have the impression that the bottom line preceded the reasoning.
I expected your statement to get more boos for the same reason that you expected my premise in the other discussion to be assumed for moral rather than evidence-based reasons. That is, I am used to other members of your species (I very much like that phrasing) taking very strong and sudden positions condemning suggestions of inherent inequality between the sexes, regardless of having a rational basis. I was not trying to boo your statement myself.
That said, I feel like I have legitimate reasons to oppose suggestions that women are inherently weaker in mathematics and related fields. I mentioned one immediately below the passage you quoted. If you insist on supporting that view, I ask that you start doing so by citing evidence, and then we can begin the debate from there. At minimum, I feel like if you are claiming women to be inherently inferior, the burden of proof lies with you.
Edit: fixed typo
Mathematical ability is most remarked on at the far right of the bell curve. It is very possible (and there's lots of evidence to support the argument) that women simply have lower variance in mathematical ability. The average is the same. Whether or not 'lower variance' implies 'inherently weaker' is another argument, but it's a silly one.
I'm much too lazy to cite the data, but a quick Duck Duck Go search or maybe Google Scholar search could probably find it. An overview with good references is here.
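The variance point can be illustrated with a quick numerical sketch. The standard deviations below (1.0 vs. 0.9) are made-up placeholders, not figures from any study; the point is only that equal means with unequal variances produce increasingly lopsided ratios far out in the tail:

```python
# Illustrative sketch: equal means, different variances, far-tail ratios.
# The SD figures (1.0 vs 0.9) are hypothetical, chosen only for illustration.
from statistics import NormalDist

higher_var = NormalDist(mu=0, sigma=1.0)
lower_var = NormalDist(mu=0, sigma=0.9)

for cutoff in (2, 3, 4):  # standard deviations above the common mean
    p_hi = 1 - higher_var.cdf(cutoff)  # fraction of higher-variance group past cutoff
    p_lo = 1 - lower_var.cdf(cutoff)   # fraction of lower-variance group past cutoff
    print(f"cutoff {cutoff} SD: ratio = {p_hi / p_lo:.1f}")
```

With these toy numbers the ratio grows as the cutoff moves right, which is why a variance difference shows up mainly in elite environments even when the averages are identical.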
I'm not claiming that there aren't systematic differences in position or shape of the distribution of ability. What I'm claiming is that no one has sufficiently proved that these differences are inherent.
I can think of a few plausible non-genetic influences that could reduce variance, but even if none of those come into play, there must be others that are also possibilities. Do you see why I'm placing the burden of proof on you to show that differences are biologically inherent, but also why I believe that this is such a difficult task?
Either because you don't understand how bayesian evidence works or because you think the question is social political rather than epistemic.
That was the point of making the demand.
You cannot change reality by declaring that other people have 'burdens of proof'. "Everything is cultural" is not a privileged hypothesis.
It might have been marginally more productive to answer "No, I don't see. Would you explain?" But, rather than attempting to other-optimize, I will simply present that request to datadataeverywhere. Why is the placement of "burden" important? With this supplementary question: Do you know of evidence strongly suggesting that different cultural norms might significantly alter the predominant position of the male sex in academic mathematics?
I can certainly see this as a difficult task. For example, we can imagine that fictional rational::Harry Potter and Hermione were both taught as children that it is ok to be smart, but that only Hermione was instructed not to be obnoxiously smart. This dynamic, by itself, would be enough to strongly suppress the number of women who rise to the highest levels in math.
But producing convincing evidence in this area is not an impossible task. For example, we can empirically assess the impact of the above mechanism by comparing the number of bright and very bright men and women who come from different cultural backgrounds.
Rather than simply demanding that your interlocutor show his evidence first, why not go ahead and show yours?
I agree, and this was what I meant. Distinguishing between nature and nurture, as wedrifid put it, is a difficult but not impossible task.
I hope I answered both of these in my comment to wedrifid below. Thank you for bothering to take my question at face value (as a question that requests a response), instead of deciding to answer it with a pointless insult.
Is mathematical ability a bell curve?
My own anecdotal experience has been that women are rare in elite math environments, but don't perform worse than the men. That would be consistent with a fat-tailed rather than normal distribution, and also with higher computed variance among women.
Also anecdotal, but it seems that when people come from an education system that privileges math (like Europe or Asia as opposed to the US) the proportion of women who pursue math is higher. In other words, when you can get as much social status by being a poly sci major as a math major, women tend not to do math, but when math is very clearly ranked as the "top" or "most competitive" option throughout most of your educational life, women are much more likely to pursue it.
I have no idea; sorry, saying so was bad epistemic hygiene. I thought I'd heard something like that but people often say bell curve when they mean any sort of bell-like distribution.
I'm left confused as to how to update on this information... I don't know how large such an effect is, nor what the original literature on gender difference says, which means that I don't really know what I'm talking about, and that's not a good place to be. I'll make sure to do more research before making such claims in the future.
Absolutely not. In general people overestimate the importance of 'intrinsic talent' on anything. The primary heritable component of success in just about anything is motivation. Either g or height comes second depending on the field.
I agree. I think it is quite obvious that ability is always somewhat heritable (otherwise we could raise our pets as humans), but this effect is usually minimal enough to not be evident behind the screen of either random or environmental differences. I think this applies to motivation as well!
And that was really what my claim was; anyone who claims that women are inherently less able in mathematics has to prove that any measurable effect is distinguishable from and not caused by cultural factors that propel fewer women to have interest in mathematics.
It doesn't. (Unfortunately.)
Am I misunderstanding, or are you claiming that motivation is purely an inherited trait? I can't possibly agree with that, and I think even simple experiments are enough to disprove that claim.
Misunderstanding. Expanding the context slightly:
It doesn't. (Unfortunately.)
When it comes to motivation, the differences between people are not trivial. When it comes to the particular instance of differences between the sexes, there are powerful differences in motivating influences. Most human motives are related to sexual signalling and gaining social status. The optimal actions to achieve these goals are significantly different for males and females, which is reflected in which things are the most motivating. It most definitely should not be assumed that motivational differences are purely cultural - and it would be astonishing if they were.
But if we can't measure the cultural factors and account for them why presume a blank slate approach? Especially since there is sexual dimorphism in the very nervous and endocrine system.
I think you got stuck on the aptitude point. To elaborate: considering that humans aren't a very sexually dimorphic species (there are near relatives that are less so, however - example: gibbons), I'm pretty sure the mean g (if such a thing exists) of both men and women is probably about the same. There are, however, other aspects of succeeding at compsci or math than general intelligence.
Assuming that men and women carrying exactly the same memes will respond, on average, identically to identical situations is an extraordinary claim. I'm struggling to come up with an evolutionary model that would square this with what is known (for example, the greater historical reproductive success of the average woman vs. the average man, which we can read from the distribution of genes). If I were presented with empirical evidence, then this would be just too bad for the models; but in the absence of meaningful measurement (by your account), why not assign greater probability to the outcome predicted by the same models that work so well when tested on other empirical claims?
I would venture to state that this case is especially strong for preferences.
And if you are trying to fine-tune the situations and memes for each gender so as to balance this, how can one demonstrate that this isn't a step away from, rather than toward, improving Pareto efficiency? And if it's not, why proceed with it?
Also to admit a personal bias I just aesthetically prefer equal treatment whenever pragmatic concerns don't trump it.
We can't directly measure them, but we can get an idea of how large they are and how they work.
For example, take the gender difference in empathic abilities. While women score higher on empathy in self-report tests, the difference is much smaller on direct tests of ability, and often nonexistent on tests of ability where participants aren't told that it's empathy being tested. And then there's the motivation of seeming empathetic. One of the best empathy tests I've read about is Ickes', which worked like this: two participants meet in a room and have a brief conversation, which is taped. Then they go into separate rooms and the tape is played back to them twice. The first time, they jot down the times at which they remember feeling various emotions. The second time, they jot down the times at which they think their partner was feeling an emotion, and what it was. Then the records are compared, and each participant receives an accuracy score. When the test is run like this, there is no difference in ability between men and women. However, a difference emerges when another factor is added: each participant is asked to write a "confidence level" for each prediction they make. In that procedure, women score better, presumably because their desire to appear empathetic (write down higher confidence levels) causes them to put more effort into the task. But where do desires to appear a certain way come from? At least partly from cultural factors that dictate how each gender is supposed to appear. This is probably also why women are overconfident, relative to men, when self-reporting their empathic abilities.
The same applies to math. Among women and men with the same math ability as scored on tests, women will rate their own abilities much lower than the men do. Since people do what they think they'll be good at, this will likely affect how much time these people spend on math in future, and the future abilities they acquire.
And then there's priming. Asian American women do better on math tests when primed with their race (by filling in a "race" bubble at the top of the test) than when primed with their gender (by filling in a "sex" bubble). More subtly, priming affects people's implicit attitudes towards gender-stereotyped domains too. People are often primed about their gender in real life, each time affecting their actions a little, which over time will add up to significant differences in the paths they choose in life in addition to that which is caused by innate gender differences. Right now we don't have enough information to say how much is caused by each, but I don't see why we can't make more headway into this in the future.
How do you know non-neurotypicals aren't over- or under-represented on Less Wrong, relative to your field, in the same way you know that the groups whose absence you bemoan are under-represented relative to your field?
Is it just because being neurotypical is harder to measure and define? I concede that measuring who is a woman or a man, or who is considered black and who is considered Asian, is in the average case easier than measuring who is neurotypical. But when it comes to definitions, those concepts seem to be in the same order of magnitude of fuzziness as being neurotypical (sex a bit less, race a bit more).
Also, you previously established that you don't want to compare Less Wrong's diversity to the entire population of the world. I'm going to tentatively assume that you also accept that academic background affects whether people can grasp, or are interested in learning, certain key concepts needed to participate.
My question now is: why don't we crunch the numbers, instead of people yelling "too many!", "too few!" or "just right!"? We know from which countries, and in what numbers, visitors come; we know the educational distributions in most of those countries; and we know how large a fraction of this group is proficient enough in English to participate meaningfully on Less Wrong.
This is ignoring the fact that the only data we have on sex or race is a simple self reported poll and our general impression.
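For what it's worth, the "crunch the numbers" idea could start as simply as a one-sample proportion test against a demographically adjusted baseline. The sample size (160) and the adjusted baseline (10%) below are hypothetical placeholders I made up; only the 3.8% figure comes from the survey discussed upthread:

```python
# A minimal sketch of the "crunch the numbers" proposal: test whether an
# observed share of respondents differs from a baseline expectation.
# n and baseline are hypothetical placeholders; 3.8% is the survey figure
# quoted earlier in this thread.
from math import sqrt

n = 160            # hypothetical number of survey respondents
observed = 0.038   # observed share (from the survey quoted upthread)
baseline = 0.10    # hypothetical expected share after adjusting for
                   # country, education, and English proficiency

se = sqrt(baseline * (1 - baseline) / n)  # standard error under the null
z = (observed - baseline) / se
print(f"z = {z:.2f}")  # |z| > 1.96 would reject equality at the 5% level
```

The interesting work, of course, is in estimating the baseline honestly from visitor and education data, not in the arithmetic.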
But if we crunch the numbers and the probability densities end up looking pretty similar from the best data we can find, why isn't the burden of proof on the one proposing policy or action, to show that we are indeed wasting potential on Less Wrong and that the change would improve our odds of progressing towards becoming more rational? And if we are promoting our members' values, even when they aren't neutral or positive towards reaching our objectives, why don't we spell them out, as long as they truly are common? I'm certain there are a few - perhaps the value of life and existence (though these have been questioned and debated here too), or perhaps some utilitarian principles.
But how do we know any position people take would really reflect their values and wouldn't just be status signalling? Heck, many people whose professed values include (or don't include) a certain inherent "goodness" of existence probably profess them for signalling reasons and would quickly change their minds in a different situation!
Not even mentioning the general effect of the mindkiller.
But like I have stated before, there are certainly many spaces where we can optimize the stated goal by outreach. This is why I think this debate should continue but with a slightly different spirit. More in line with, to paraphrase you:
I'm talking about the Western memplex whose members use the word minority when describing women in general society, even though they represent a clear numerical majority.
I was suspicious that you used the word minority in that sense rather than the more clearly defined sense of being a numerical minority.
Sometimes when talking about groups we can avoid discussing which meaning of the word we are employing.
Example: Discussing the repression of the Mayan minority in Mexico.
While other times we can't do this.
Example: Discussing the history and current relationship between the Arab upper class minority and slavery in Mauritania.
Ah, apologies I see I carried it over from here:
You explicitly state later that you are particularly interested in this axis of diversity
Perhaps this would be more manageable if we looked at each of the axes of variability that you raise and talked about each independently, as much as possible? Again, this is what previously confused me about the phrase "groups we usually consider adding diversity": are there certain groups that are inherently associated with the word diversity? Are we using the word diversity to mean something like "proportionate representation of certain kinds of people in all groups"? Or are we using it in line with "infinite diversity in infinite combinations", where if you create a mix of 1 part people A and 4 parts people B, and have it coexist and cooperate with another that is 2 parts people A and 3 parts people B, where previously all groups were of the first kind, you create a kind of meta-diversity (using the word diversity in its politically charged meaning)?
Then why are you hunting for equal representation on LW between different groups united in a space as arbitrary as one defined by borders?
While many important components of the modern scientific method did originate among scholars in Persia and Iraq in the medieval era, its development over the past 700 years has been disproportionately seen in Europe and, later, its colonies. I would argue its adoption was part of the reason for the later (let's say last 300 years) technological superiority of the West.
Edit: I wrote up quite a long wall of text. I'm just going to split it into a few posts so as to make it more readable, as well as give me a better sense of what is getting upvoted or downvoted based on its merit or lack thereof.
You may want to check the survey results.
Thank you very much. I looked for but failed to find this when I went to write my post. I had intended to start with actual numbers, assuming that someone had previously asked the question. The rest is interesting as well.
Thank you; that was one of the things I'd come to this thread to ask about.
I generally agree with your assessment. But I think there may be more East and South Asians than you think, more 36-80s and more 15-19s too. I have no reason to think we are underrepresented in gays or in deaf people.
My general impression is that women are not made welcome here - the level of overt sexism is incredibly high for a community that tends to frown on chest-beating. But perhaps the women should speak for themselves on that subject. Or not. Discussions on this subject tend to be uncomfortable. Sometimes it seems that the only good they do is to flush some of the more egregious sexists out of the closet.
We have already had quite a lot of that.
OMG! A whole top-level-posting. And not much more than a year ago. I didn't know. Well, that shows that you guys (and gals) have said all that could possibly need to be said regarding that subject. ;)
But thx for the link.
It does have about 100 pages of comments. Consider also the "links to followup posts" in line 4 of that article. It all seemed to go on forever - but maybe that was just me.
Ok. Well, it is on my reading list now. Again, thx.
I may be wrong, but I don't expect the proportion of gays in LessWrong to be very different from the proportion in the population at large.
It might matter whether or not one counts closeted gays. Either way, I was just throwing another potential partition into the argument. I also doubt that we differ significantly in our proportion of deaf people; the point is that being deaf is qualitatively different yet shouldn't impair one's rational capabilities. Same for being female, black, or most of the groups that we think of as adding to diversity.
Too little memetic diversity is clearly a bad thing, for the same reason that too little genetic variability is. However, how much and what kind are optimal depends on the environment.
Also, have you considered the possibility that diversity for you is not a means to an end but a value in itself? In that case, unless it conflicts with other values you consider more important, you don't need any justification for it. I'm quite honest with myself that I hope that post-singularity the universe will not be paperclipped with only the things that I and people like me (or humans in general, for that matter) value. I value a diverse universe.
Edit:
I... uhm... see. At first I was very confused by all the far-reaching implications of this; however, thanks to keeping a few things in mind, I'm just going to ascribe this to you being from a different cultural background than mine.
Diversity is a value for me, but I'd like to believe that is more than simply an aesthetic value. Of course, if wishes were horses we'd all be eating steak.
Memetic diversity is one of the non-aesthetic arguments I can imagine, and my question is partially related to that. Genetic diversity is superfluous past a certain point, so it seems reasonable that the same might be true of memetic diversity. Where is that point relative to where Less Wrong sits?
Um, all I was saying was that women and black people are underrepresented here, but that ought not be explained away by the subject matter of Less Wrong. What does that have to do with my cultural background or the typical mind fallacy? What part of that do you disagree with?
"Ought"? I say it 'ought' to be explained away by the subject matter of Less Wrong if and only if that is an accurate explanation. Truth isn't normative.
Is this a language issue? Am I using "ought" incorrectly? I'm claiming that the truth of the matter is that women are capable of rationality, and have a place here, so it would be wrong (in both an absolute and a moral sense) to claim that their lack of presence is due to this being a blog about rationality.
Perhaps I should weaken my statement to say "if women are as capable as men in rationality, their underrepresentation here ought not be explained away by the subject matter". I'm not sure whether I feel like I should or shouldn't apologize for taking the premise of that sentence as a given, but I did, hence my statement.
Ahh, ok. That seems reasonable. I had got the impression that you had taken the premise for granted primarily because it would be objectionable if it was not true and the fact of the matter was an afterthought. Probably because that's the kind of reasoning I usually see from other people of your species.
I'm not going to comment either way about the premise except to say that it is inclination and not capability that is relevant here.
"Um, all I was saying was that women and black people are underrepresented here, but that ought not be explained away by the subject matter of Less Wrong. What does that have to do with my cultural background or the typical mind fallacy? What part of that do you disagree with?"
To get back to basics for a moment: we don't know that women and black people are underrepresented here. Usernames are anonymous. Even if we suspect they're underrepresented, we don't know by how much -- or whether they're underrepresented compared to the internet in general, or the geek cluster, or what.
Even assuming you want more demographic diversity on LW, it's not at all clear that the best way to get it is by doing something differently on LW itself.
Well I will try to elaborate.
After I read this it struck me that you may value a much smaller space of diversity than I do, and that you probably value very particular kinds of diversity (race, gender, some types of culture) much more than, or even perhaps to the exclusion of, others (non-neurotypical, ideological, and especially values). I'm not saying you don't (I can't know this) or that you should. I at first assumed you thought the way you do because you came up with a system more or less similar to my own, an incredibly unlikely event; that is why I scolded myself for employing the mind projection fallacy, while providing a link pointing out that this particular component is firmly integrated into the whole "stuff White people like" (for lack of a better word) culture that exists in the West, so anyone I encounter online with whom I share the desire for certain spaces of diversity is on average overwhelmingly more likely to get it from that memeplex.
Also, while I'm certainly sympathetic to hoping one's values are practical, one needs to learn to live with the possibility that one's values are neutral, or even impractical, or perhaps in conflict with each other. I overall in principle support efforts to lower unnecessary barriers for people to join Less Wrong. But the OP doesn't make it explicit that this is about values, and about you wanting other Less Wrongers to live by your values; it seems to communicate that it's about the optimal course for improving rationality.
You haven't done this. Your argument so far has been to simply go from:
"arbitrary designated group/blacks/women are capable of rationality, but are underrepresented on Lesswrong"
to
"Lesswrong needs to divert some (as much as needed?) efforts to correct this."
Why?
Like I said, lowering unnecessary barriers (actually, at this point you even have to make the case that they exist and that they aren't simply the result of the other factors I described in the post) won't repel the people who already find LW interesting, so it should in principle get us a more effective and healthy community.
However what if this should prove to be insufficient? Divert resources to change the preferences of designated under-represented groups? Add elements to Lesswrong that aren't strictly necessary to reach its stated objectives? Which is not to say we don't have them now, however the ones we have now probably cater to the largest potential pool of people predisposed to find LW's goals interesting.
Konkvistador:
There is a fascinating question that I've asked many times in many different venues, and never received anything approaching a coherent answer. Namely, among all the possible criteria for categorizing people, which particular ones are supposed to have moral, political, and ideological relevance? In the Western world nowadays, there exists a near-consensus that when it comes to certain ways of categorizing humans, we should be concerned if significant inequality and lack of political and other representation is correlated with these categories, we should condemn discrimination on the basis of them, and we should value diversity as measured by them. But what exact principle determines which categories should be assigned such value, and which not?
I am sure that a complete and accurate answer to this question would open a floodgate of insight about modern society. Yet out of all the difficult questions I've ever discussed, this seems to be the hardest one to open a rational discussion about; the amount of sanctimoniousness and/or logical incoherence in the answers one typically gets is just staggering. One exception is a handful of discussions I've read on Overcoming Bias, which at least asked the right questions, but unfortunately only scratched the surface in answering them.
My experience is similar. Even people that are usually extremely rational go loopy.
I seem to recall one post there that specifically targeted the issue. But you did ask "what basis should" while Robin was just asserting a controversial is.
wedrifid:
I probably didn't word my above comment very well. I am also asking only for an accurate description of the controversial "is."
The fact is that nearly all people attach great moral importance to these issues, and what I'd like (at least for start) is for them to state the "shoulds" they believe in clearly, comprehensively, and coherently, and to explain the exact principles with which they justify these "shoulds." My above stated questions should be understood in these terms.
If you are sufficiently curious you could make a post here. People will be somewhat motivated to tone down the hysteria given that you will have pre-emptively shunned it.
I've spent some time thinking about this, and my conclusion is that, at least personally, what I value about diversity is the variety of worldviews that it leads to.
This does result in some rather interesting issues, though. For example, one of the major factors in the difference in worldview between dark-skinned Americans and light-skinned Americans is the existence of racism, both overt and institutional. Thus, if I consider diversity to be very valuable, it seems that I should support racism. I don't, though - instead, I consider that the relevant preferences of dark-skinned Americans take precedence over my own preference for diversity. (Similarly, left-handed people's preference for non-abusive writing education appropriately took precedence over the cultural preference for everyone to write with their right hands, and left-handedness is, to the best of my knowledge, no longer a significant source of diversity of worldview.)
That assumes coherence in the relevant group's preference, though, which isn't always the case. For example, among people with disabilities, there are two common views that are, given limited resources, significantly conflicting: The view that disabilities should be cured and that people with disabilities should strive to be (or appear to be) as normal as possible, and the view that disabilities should be accepted and that people with disabilities should be free to focus on personal goals rather than being expected to devote a significant amount of effort to mitigating or hiding their disabilities. In such cases, I support the preference that's more like the latter, though I do prefer to leave the option open for people with the first preference to pursue that on a personal level (meaning I'd support the preference 'I'd prefer to have my disability cured', but not 'I'd prefer for my young teen's disability to be treated even though they object', and I'm still thinking about the grey area in the middle where such things as 'I'd prefer for my baby's disability to be cured, given that it won't be able to be cured when they're older if it's not cured now, and given that if it's not cured I'm likely to be obligated to take care of them for the rest of my life' exist).
I think that's coherent, anyway, as far as it goes. I'm sure there are issues I haven't addressed, though.
With your first example, I think you're on to an important politically incorrect truth, namely that the existence of diverse worldviews requires a certain degree of separation, and "diversity" in the sense of every place and institution containing a representative mix of people can exist only if a uniform worldview is imposed on all of them.
Let me illustrate using a mundane and non-ideological example. I once read a story about a neighborhood populated mostly by blue-collar folks with a strong do-it-yourself ethos, many of whom liked to work on their cars in their driveways. At some point, however, the real estate trends led to an increasing number of white collar yuppie types moving in from a nearby fancier neighborhood, for whom this was a ghastly and disreputable sight. Eventually, they managed to pass a local ordinance banning mechanical work in front yards, to the great chagrin of the older residents.
Therefore, when these two sorts of people lived in separate places, there was on the whole a diversity of worldview with regards to this particular issue, but when they got mixed together, this led to a conflict situation that could only end up with one or another view being imposed on everyone. And since people's worldviews manifest themselves in all kinds of ways that necessarily create conflict in case of differences, this clearly has implications that give the present notion of "diversity" at least a slight Orwellian whiff.
That's intriguing. Would you care to mention some of the sorts of diversity which usually aren't on the radar?
I think I'm going to stop responding to this thread, because everyone seems to be assuming I'm meaning or asking something that I'm not. I'm obviously having some problems expressing myself, and I apologize for the confusion that I caused. Let me try once more to clarify my position and intentions:
I don't really care how diverse Less Wrong is. I was, however, curious how diverse the community is along various axes, and was interested in sparking a conversation along those lines. Vladimir's comment is exactly the kind of questions I was trying to encourage, but instead I feel like I've been asked to defend criticism that I never thought I made in the first place.
I was never trying to say that there was something wrong with the way that Less Wrong is, or that we ought to do things to change our makeup. Maybe it would be good for us to, but that had nothing to do with my question. I was instead (trying to, and apparently badly) asking for people's opinions about whether or how our makeup along any partition --- the ones that I mentioned or others --- causes in us an inability to best solve the problems that we are interested in solving.
My vague impression is that the proportion of people here with sexual orientations that are not in the majority in the population is higher than that of such people in the population.
This is probably explained completely by LW's tendency to attract <strike>weirdos</strike> people who are willing to question orthodoxy.
Ignoring the obviously political issue of "concern", it's fun to consider this question on a purely intellectual level. If you're a white male, why are you? Is the anthropic answer ("just because") sufficient? At what size of group does it cease to be sufficient? I don't know the actual answer. Some people think that asking "why am I me" is inherently meaningless, but for me personally, this doesn't dissolve the mystery.
The flippant answer is that a group size of 1 lacks statistical significance; at some group size, that ceases to be the case.
I asked not from a political perspective. In arguments about diversity, political correctness often dominates. I am actually interested in, among other things, whether a lack of diversity is a functional impairment for a group. I feel strongly that it is, but I can't back up that claim with evidence strong enough to match my belief. For a group such as Less Wrong, I have to ask what we miss due to a lack of diversity.
The flippant answer to your answer is that you didn't pick LW randomly out of the set of all groups. The fact that you, a white male, consistently choose to join groups composed mostly of white males - and then inquire about diversity - could have any number of anthropic explanations from your perspective :-) In the end it seems to loop back into why are you, you again.
ETA: apparently datadataeverywhere is female.
No, I think that's a much less flippant answer :-)
This sounds like the same question as why are there so few top-notch women in STEM fields, why there are so few women listed in Human Accomplishment's indices*, why so few non-whites or non-Asians score 5 on AP Physics, why...
In other words, here be dragons.
* just Lady Murasaki, if you were curious. It would be very amusing to read a review of The Tale of Genji by Eliezer or a LWer. My own reaction by the end was horror.
That's absolutely true. I've worked for two US National Labs, and both were monocultures. At my first job, the only woman in my group (20 or so) was the administrative assistant. At my second, the numbers were better, but at both, there were literally no non-whites in my immediate area. The inability to hire non-citizens contributes to the problem---I worked for Microsoft as well, and all the non-whites were foreign citizens---but it's not as if there aren't any women in the US!
It is a nearly intractable problem, and I think I understand it fairly well, but I would very much like to hear the opinion of LWers. My employers have always been very eager to hire women and minorities, but the numbers coming out of computer science programs are abysmal. At Less Wrong, a B.S. or M.S. in a specific field is not a barrier to entry, so our numbers should be slightly better. On the other hand, I have no idea how to go about improving them.
The Tale of Genji has gone on my list of books to read. Thanks!
Yes, but we are even more extreme in some respects; many CS/philosophy/neurology/etc. majors reject the Strong AI Thesis (I've asked), while it is practically one of our dogmas.
I realize that I was a bit of a tease there. It's somewhat off topic, but I'll include (some of) the hasty comments I wrote down immediately upon finishing:
The prevalence of poems & puns is quite remarkable. It is also remarkable how tired they all feel; in Genji, poetry has lost its magic and has simply become another stereotyped form of communication, as codified as a letter to the editor or small talk. I feel fortunate that my introductions to Japanese poetry have usually been small anthologies of the greatest poets; had I first encountered court poetry through Genji, I would have been disgusted by the mawkish sentimentality & repetition.
The gender dynamics are remarkable. Toward the end, one of the two then main characters becomes frustrated and casually has sex with a serving lady; it's mentioned that he liked sex with her better than with any of the other servants. Much earlier in Genji (it's a good thousand pages, remember), Genji simply rapes a woman, and the central female protagonist, Murasaki, is kidnapped as a girl and he marries her while she is still what we would consider a child. (I forget whether Genji sexually molests her before the pro forma marriage.) This may be a matter of non-relativistic moral appraisal, but I get the impression that in matters of sexual fidelity, rape, and children, Heian-era morals were not much different from my own, which makes the general impunity all the more remarkable. (This is the 'shining' Genji?) The double standards are countless.
The power dynamics are equally remarkable. Essentially every speaking character is nobility, low or high, or Buddhist clergy (and very likely nobility anyway). The characters spend next to no time on 'work' like running the country, despite many main characters ranking high in the hierarchy and holding ministerial ranks; the Emperor in particular does nothing except party. All the households spend money like mad, and just expect their land-holdings to send in the cash. (It is a signal of their poverty that the Uji household ever even mentions how much less money is coming from their lands than used to.) The Buddhist clergy are remarkably greedy & worldly; after the death of the father of the Uji household, the abbot of the monastery he favored sends the grief-stricken sisters a note - which I found remarkably crass - reminding them that he wants the customary gifts of valuable textiles.
The medicinal practices are utterly horrifying. They seem to consist, one and all, of the following algorithm: 'while sick, pay priests to chant.' If chanting doesn't work, hire more priests. (One remarkable freethinker suggests that a sick woman eat more food.) Chanting is, at least, not outright harmful like bloodletting, but it's still sickening to read through dozens of people dying amidst chanting. In comparison, the bizarre superstitions that guide many characters' activities (trapping them in their houses on inauspicious days) are practically unobjectionable.
Eliezer has been accused of delusions of grandeur for his belief in his own importance. But if Eliezer is guilty of such delusions then so am I and, I suspect, are many of you.
Consider two beliefs:
1. The next millennium will be the most critical in mankind’s existence, because in most of the Everett branches arising out of today mankind will go extinct or start spreading through the stars.
2. Eliezer’s work on friendly AI makes him the most significant determinant of our fate in (1).
Let 10^N represent the average across our future Everett branches of the total number of sentient beings whose ancestors arose on earth. If Eliezer holds beliefs (1) and (2) then he considers himself the most important of these beings and the probability of this happening by chance is 1 in 10^N. But if (1) holds then the rest of us are extremely important as well through how our voting, buying, contributing, writing… influences mankind’s fate. Let say that makes most of us one of the trillion most important beings who will ever exist. The probability of this happening by chance is 1 in 10^(N-12).
If N is at least 18, it’s hard to think of a rational criterion under which believing you are 1 in 10^N is delusional whereas thinking you are 1 in 10^(N-12) is not.
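The arithmetic behind this comparison can be laid out explicitly (a sketch, assuming a uniform prior over which of the 10^N beings one turns out to be):

```latex
% Prior probability of being the single most important being:
P(\text{rank} = 1) = 10^{-N}

% Prior probability of being among the top 10^{12} beings
% ("one of the trillion most important"):
P\left(\text{rank} \le 10^{12}\right) = \frac{10^{12}}{10^{N}} = 10^{-(N-12)}
```

For N = 18 these come out to 10^{-18} and 10^{-6}; both describe highly improbable positions, which is the point of the comparison: if the first belief is dismissed as delusional purely on prior improbability, the second is hard to save by the same standard.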
It's not about the numbers, and it's not about Eliezer in particular. Think of it this way:
Clearly, the development of interstellar travel (if we successfully accomplish this) will be one of the most important events in the history of the universe.
If I believe our civilization has a chance of achieving this, then in a sense that makes me, as a member of said civilization, important. This is a rational conclusion.
If I believe I'm going to build a starship in my garage, that makes me delusional. The problem isn't the odds against me being the one person who does this. The problem is that nobody is going to do this, because building a starship in your garage is simply impossible; it's just too hard a job to be done that way.
You assume it is. But maybe you will invent AI and then use it to design a plan for how to build a starship in your garage. So it's not simply impossible. It's just unknown, and even if you could, there's no reason to believe that it would be a good decision. But hey, in a hundred years, who knows what people will build in their garages, or the equivalent thereof? I imagine people a hundred years ago would find our projects pretty strange.
I'd like to discuss, with anyone who is interested, the ideas of Metaphysics Of Quality, by Robert Pirsig (laid out in Lila, An enquiry into Morals)
There are many aspects of MOQ that might make a rationalist cringe, like moral realism and giving evolution a path and purpose. But there are many interesting concepts which I heard of for the first time when I read MOQ. The fourfold division of inorganic, biological, social and intellectual static patterns of quality is quite intriguing. Many things that the transhumanist community talks about actually interact at the edges of these definitions.
nanotech runs at the border of inorganic quality and biological quality.
evolutionary psychology runs at the border of biological and social quality
at a much simpler level, a community like less wrong runs at the border of social and intellectual quality
In spite of this, I find the layered nature of this understanding is probably useful in understanding present systems and designing new ones.
Maintaining stability at a lower level of quality is probably very important whenever new dynamic things are done at a higher level. Friedrich Hayek emphasises the rule of law and stable contracts, which are the basis of the dynamism of the free market.
Francis Fukuyama came out with the idea of "The End of History", with democratic liberalism being the final system, a permanent social static quality. This was an extremely bold view, but someone who understood even a bit of MOQ could see that changes at a lower level could still happen. No social structure can be permanent without the biological level being fixed. And bingo! Fukuyama, being a smart man, understood this, and his next book was "Our Posthuman Future", which urged extreme social control of biological manipulation, in particular, ceasing research.
In Pirsig's view, social quality overriding biological quality is moral. I don't agree with Pirsig's view that when social quality overrides biological quality, it is always moral. It is societal pressure that creates incentives for female infanticide in India, which overrides the biological 50-50 ratio. This will result in huge social problems in the future.
A proper understanding of the universe, when we arrive at it, would have all these intricate layers laid out in detail. But it is interesting to talk about even now,when the picture is incomplete.
I have been following this site for almost a year now and it is fabulous, but I haven't felt an urgent need to post to the site until now. I've been working on a climate change project with a couple of others and am in desperate need of some feedback.
I know that climate change isn't a particularly popular topic on this website (but I'm not sure why, maybe I missed something, since much of the website seems to deal with existential risk. Am I really off track here?), but I thought this would be a great place to air these ideas. Our approach tries to tackle the irrational tangle that many of our institutions appear to be caught up in, so I thought this would be the perfect place to get some expertise. The project is kind of at a standstill, and it really needs some advice and leads (and collaborators), so please feel free to praise, criticize, advise, or even join.
I saw orthonormal's "welcome to LessWrong post," so I guess this is where to post before I build up enough points. I hope it isn't too long of an introductory post for this thread?
The aim of the project is to achieve a population that is more educated in the basics of climate change science and policy, with the hope that a more educated voting public will be a big step towards achieving the policies necessary to deal with climate change.
The basic problem of educating the public about climate change is twofold. First, people sometimes get trapped into “information cocoons” (I am using Cass Sunstein’s terminology from his book Infotopia). Information cocoons are created when the news and information people seek out and surround themselves with is biased by what they already know. They are either completely unaware of competing evidence or if they are, they revise their network of beliefs to deny the credibility of those who offer it rather than consider it serious evidence. Usually, this is because they believe it is more probable that those people are not credible than that they could be wrong. This problem has always existed, and has perhaps increased since the rise of the personalized web. People who are trapped in information cocoons of denial of anthropogenic climate change will require much more evidence and counterarguments before they can begin to revise an entire network of beliefs that support their current conclusions.
Second, the population is uneducated about climate change because they lack the incentive to learn about the issues. Although we would presumably benefit if everyone were to take the time to thoroughly understand the issue, the individual cost and benefit of doing so actually runs the other way. Because the benefits of better policies accrue to everybody, but the costs are borne by the individual, people have an incentive to free ride, to let everybody else worry about the issue because either way, their individual contribution means little, and everybody else can make the informed decision. But of course, with everybody reasoning in this way there is a much lower level of education on these issues than optimal (or even necessary to create the necessary change, especially if there are interest groups with opposing goals).
The solution is to institute some system that can crack into these information cocoons and at the same time provide wide-ranging personal incentives for participating. For the former, we propose to develop a layman’s guide to climate change science and economic and environmental policy. Many such guides already exist, although we have some different ideas about how to make ours more transparent to criticism and more thorough in its discussion of the epistemic uncertainty surrounding the whole issue. (There is definitely a lot we can learn from LessWrong on this point.) Also, I think we have a unique idea about developing a system of personal incentives. I will discuss the latter issue first.
(sorry if this comment is too long; continued from above) Creating Incentives
Of course, a sense of public pride exists in many people, and this has led large numbers of people to learn about the issues without external inducements. But the population of educated voters could be vastly increased if there were these personal benefits, especially for groups where environmentalism has not become a positive norm.
While we have thought about other approaches to creating these wide-ranging personal incentives, specifically material prizes and the intangible benefits of social networking and personal pride (such as those behind Wikipedia’s or Facebook’s success), it appears that these are difficult to apply to the issue of climate change. Material prizes would be costly to fund, especially to make them worth the several hours necessary to learn about the issues. For one thing, the issues are difficult enough, and the topic possibly scary enough, that it is not necessarily fun to learn about them and discuss them with your friends. For another, it takes time and a little bit of dedicated thinking to achieve an adequate understanding of the problem, but part of the incentive to do so on Wikipedia—to show off your genuine expertise on the topic, even if anonymous—is exactly what is supposed to disappear when there is an educated populace on the topic: you will not be a unique expert, just another person who understands the issue like everyone else. The sense of urgency and personal importance needed to spur people to learn just is not there with these modes of incentivization.
But there is one already extremely effective way that companies, schools, and other organizations incentivize behavior that has little to do with immediate personal benefits. These institutions use their ability to advance or deter people’s future careers to motivate performance in certain areas. The gatekeepers to these future prospects can use their position to bring about all kinds of behavior that would otherwise seem to be a huge burden on those individuals. Ordinary hiring and admissions processes, for example, can impose large writing and learning requirements on their applicants, but because the personal benefits of getting into these organizations are enormous, people are more than willing to fulfill these requirements. Oftentimes, these requirements do not even necessarily have much to do with the stated purpose of the organization, but are used as filtering mechanisms to determine which are the best candidates. Admissions essays are not what universities set out to produce, but rather a bar they set to see which candidates can do well. These bars (known as “sorting mechanisms” in economics) sometimes have additional beneficial effects such as increased writing practice for future students, but not necessarily. For example, polished CV writing is a skill that is only good for overcoming these bars, without additional personal or social benefits. But because these additional effects are really only secondary attributes of the main function of the hurdle, the bar can be modified in ways that create socially beneficial purposes without affecting their main function.
So our specific proposal is to leverage employers’ and schools’ gatekeeper status to impose a hiring hurdle, similar to a polished CV or a high standardized test score, of learning about contemporary climate change science and policy. This hiring hurdle would act much like other hiring hurdles imposed by organizations, but would create a huge personal incentive for individuals to learn about climate change in place of, or in addition to, the huge personal incentive to write good cover letters or score well on the SATs.
The hiring hurdle would be implemented by a third party: a website that acts both as a layman's guide to climate change science and policy (possibly something that already exists, but hopefully something more modular) and as a secure testing center for that knowledge. The website would provide an easy way for people to learn about the most up-to-date climate science and the different policy options available, something that could probably be read and understood with an afternoon's effort. Once individuals feel they understand the material well enough, they can take a secure test that measures the extent of their climate knowledge. (The test could be retaken if they are dissatisfied with the result, or imposed again once new and highly relevant information is discovered.) The score could then be reported to the institutions they apply to. It would be just one more tickbox for institutions to check before accepting applicants, and each institution could set the score it requires.
The major benefit of this approach is that it creates enormous personal incentives at a very small cost. Companies and other institutions already have hiring hurdles in place, and they would not have to burden their HR staff with hundreds of climate change essays, just a simple score they could look up on the website. The website itself can be hosted for a relatively small cost, and institutions can sign up to the program as more executives and leaders are convinced that this is a good idea.
Presumably, it is much easier to convince the few people who are in charge of such organizations that climate change education is important than to convince individual members of the public. Potentially, this project could affect millions, especially if large corporations such as McDonald's or Walmart, or universities with many applicants, sign on to the program. Furthermore, approaching the problem of global climate change through nongovernmental institutions seems like a good approach because it avoids the stasis of many public institutions, and it requires convincing far fewer stakeholders. Also, many of these institutions have an increasingly global scope.
Developing a platform to combat “information cocoons” yet retain legitimacy
The major problem is that this type of incentivizing might be seen as a way of buying off or patronizing voters, but this appears to be necessary to break the “information cocoons” that many people unknowingly fall into.
Hopefully a charge of having a political agenda can be answered by allowing a certain amount of feedback and by continuing to develop the guide as more arguments are voiced. Part of the website will be organized so that dissent can be voiced publicly and openly, but only in an organized and reasoned way (something like Less Wrong, but with stricter limits on posting). The guide would have to maintain public legitimacy by being open to criticism and new evidence as it emerges, and by displaying the evidence supporting the current arguments. We would like to include a rating system, something like Rotten Tomatoes, where climate experts and the general public vote on the various arguments and scenarios that are developed (though this would probably be only for those who develop a specific interest, not part of the testable guide; the testable guide would, of course, follow major developments in this more detailed information). We have thought of using an argument map to better organize such information.
But it could not be so flexible that the previous information cocoons redevelop on the website, producing the same polarization as before. Some degree of control is necessary to drive some points home, so a delicate balance might have to be struck.
That pretty much sums up the ideas so far. The project is still almost all theorizing, although we have found a couple of programmers who might help for a reduced fee (know of anyone who would be interested in doing this for free?) and are looking into some funding sources. This would be a large-scale attempt at rational debate and discussion, spurred by a mechanism encouraging everybody to participate, so any advice you have would be enormously appreciated.
Sincerely, Allen Wang
This seems to have the same problem as teaching evolution in high school biology classes: you can pass a test on something and not believe a word of it. Cracking an information cocoon can be damn hard; just consider how unusual religious conversions are, or how rarely people change their minds on such subjects as UFOs, conspiracy theories, cryonics, or any other subject that attracts cranks.
Also, why should employers care about a person's climate change test score?
Finally, why privilege knowledge about climate change, or all things, by using it for gatekeeping, instead of any of the many non-controversial subjects normally taught in high schools, for which SAT II subject tests already exist?
I have recently had the experience of encountering an event of extremely low probability.
Did I just find a bug in the Matrix?
A question about modal logics.
Temporal logics are quite successful in terms of expressiveness and applications in computer science, so I thought I'd take a look at some other modal logics - in particular deontic logic that deal with obligations, rules, and deontological ethics.
It seems like an obvious approach, as we want to have "is"-statements, "ought"-statements, and statements relating what "is" with what "ought" to be.
What I found was rather disastrous, far worse than the neat and unambiguous temporal logics: low expressiveness, ambiguous interpretations, far too many paradoxes that seem to be more about failing to specify the underlying logic correctly than about actual problems, and no convergence on a single deontic logic that works.
After reading all this, I made a few quick attempts at defining logic of obligations, just to be sure it's not some sort of collective insanity, but they all ran into very similar problems extremely quickly.
Now, I'm in no way deontologically inclined, but if I were, this would really bother me. If it's really impossible to formally express obligations, this kind of ethics is built on an extremely flimsy basis. Consequentialism has plenty of problems in practice, but at least in hypothetical scenarios it's very easy to model correctly. Deontic logic seems to lack even that.
Is there any kind of deontic logic that works well that I missed? I'm not talking about solving FAI, constructing universal rules of morality or anything like it - just about a language that expresses exactly the kind of obligations we want, and which works well in simple hypothetical worlds.
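For what it's worth, the usual possible-worlds semantics for the obligation operator O can be made concrete in a few lines. This toy model (the world set and proposition names are mine, purely for illustration) reproduces Ross's paradox, one of the classic deontic paradoxes: O(mail) entails O(mail-or-burn).

```python
# Kripke-style reading of deontic "O": O(phi) holds iff phi is true at
# every deontically ideal world. Here there is a single ideal world, in
# which the letter is mailed and not burned.
ideal_worlds = [{"mail": True, "burn": False}]

def O(phi):
    """O(phi): phi holds at every ideal world."""
    return all(phi(w) for w in ideal_worlds)

mail = lambda w: w["mail"]
burn = lambda w: w["burn"]
mail_or_burn = lambda w: w["mail"] or w["burn"]

print(O(mail))          # True: you ought to mail the letter
print(O(mail_or_burn))  # True: hence you "ought to mail it or burn it" (Ross)
print(O(burn))          # False
```

The middle line is semantically valid but reads as licensing the burning; most attempted fixes change the logic in ways that break something else, which matches the lack of convergence complained about above.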
Does anyone else think it would be immensely valuable if we had someone specialized (more so than anyone currently is) at extracting trustworthy, disinterested, x-rationality-informed probability estimates from relevant people's opinions and arguments? This community already hopefully accepts that one can learn from knowing other people's opinions without knowing their arguments; Aumann's agreement theorem, and so forth. It seems likely to me that centralizing that whole aspect of things would save a ton of duplicated effort.
Is the Open Thread now deprecated in favour of the Discussion section? If so, I suggest an Open Thread over there for questions not worked out enough for a Discussion post. (I have some.)
Omega comes up to you and tells you that if you believe in science it will make your life 1000 utilons better. He then goes on to tell you that if you believe in god, it will make your afterlife 1 million utilons better. And finally, if you believe in both science and god, you won't get accepted into the afterlife so you'll only get the 1000 utilons.
If it were me, I would tell Omega that he's not my real dad and go on believing in science and not believing in god.
Am I being irrational?
EDIT: If Omega is an infinitely all-knowing oracle, the answer may be different than if Omega is ostensibly a normal human who has predicted many things correctly. Also, by "believe in science" I mean pursuing epistemic rationality as a standard for believing things rather than, for example, literal interpretation of the Bible.
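Treating this as a straight expected-utility problem (a sketch; the credence values below are arbitrary, and questions about discounting afterlife utilons are ignored), the wager turns entirely on your probability p that Omega's afterlife claim is true:

```python
# Toy expected-utility comparison for the Omega wager.
# p = credence that the afterlife claim is true.

def eu_science(p):
    # "Believing in science" pays 1000 utilons either way.
    return 1000.0

def eu_god(p):
    # Believing in god pays 1,000,000 utilons only if the claim holds.
    return p * 1_000_000.0

for p in (0.0001, 0.001, 0.01):
    better = "god" if eu_god(p) > eu_science(p) else "science"
    print(p, eu_science(p), eu_god(p), better)
```

The break-even point is p = 1000 / 1,000,000 = 0.001, so the disagreement reduces to how much evidential weight Omega's say-so carries about the afterlife existing at all.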
The definition of Omega includes him being completely honest and trustworthy. He wouldn't tell you "I will make your afterlife better" unless he knew that there is an afterlife (otherwise he couldn't make it better), just like he wouldn't say "the current Roman Emperor is bald". If he were to say instead "I will make your afterlife better, if you have one", I would keep operating on my current assumption that there is no such thing as an afterlife.
Oh, I almost forgot - what does it even mean to "believe in science"?
> equals(correct_reasoning, Bayesian_inference)
This server is really slow.
I had a top-level post which touched on an apparently-forbidden idea downvoted to a net of around -3 and then deleted. This left my karma pinned (?) at 0 for a few months. I am not sure of the reasons for this, but suspect that the forbidden idea was partly to blame.
My karma is now back up to where I could make a top-level post. Do people think that a discussion forum on the moderation and deletion policies would be beneficial? I do, even if we all had to do silly dances to avoid mentioning the specifics of any forbidden idea(s). In my opinion, such dances are both silly and unjustified; but I promise that I'd do them and encourage them if I made such a post, out of respect for the evident opinions of others, and for the asymmetrical (though not one-sided) nature of the alleged danger.
I would not be offended if someone else "took the idea" and made such a post. I also wouldn't mind if the consensus is that such a post is not warranted. So, what do you think?
I don't. Possible downsides are flame wars among people who support different types of moderation policies (and there are bound to be some - self-styled rebels who pride themselves in challenging the status quo and going against groupthink are not rare on the net), and I don't see any possible upsides. Having a Benevolent Dictator For Life works quite well.
See this on Meatball Wiki, that has quite a few pages on organization of Online Communities.
Yes. I think that lack of policy 1) reflects poorly on the objectivity of moderators, even if in appearance only 2) diverts too much energy into nonproductive discussions.
As a moderator of a moderately large social community, I would like to note that moderator objectivity is not always the most effective way to reach the desired outcome (an enjoyable, productive community). Yes, we've compiled a list of specific actions that will result in warnings, bans, and so forth, but someone will always be able to think of a way to be an asshole which isn't yet on our list--or which doesn't quite match the way we worded it--or whatever. To do our jobs well, we need to be able to use our judgment (which is the criterion for which we were selected as moderators).
This is not to say that I wouldn't like to see a list of guidelines for acceptable and unacceptable LW posts. But I respect the need for some flexibility on the editing side.
I would like to see a top-level post on moderation policy. But I would like for it to be written by someone with moderation authority. If there are special rules for discussing moderation, they can be spelled out in the post and commenters can abide by them.
As a newcomer here, I am completely mystified by the dark hints of a forbidden topic. Every hypothesis I can come up with as to why a topic might be forbidden founders when I try to reconcile it with the fact that the people doing the forbidding are not stupid.
Self-censorship to protect our own mental health? Stupid. Secrecy as a counter-intelligence measure, to safeguard the fact that we possess some counter-measure capability? Stupid. Secrecy simply because being a member of a secret society is cool? Stupid, but perhaps not stupid enough to be ruled out. On the other hand, I am sure that I haven't thought of every possible explanation.
It strikes me as perfectly reasonable if certain topics are forbidden because discussion of such topics has historically been unproductive, has led to flame wars, etc. I have been wandering around the internet long enough to understand and even appreciate somewhat arbitrary, publicly announced moderation policies. But arbitrary and secret policies are a prescription for resentment and for time wasted discussing moderation policies.
Edit: typo correction - insert missing words
A minute in Konkvistador's mind:
I do have access to the forbidden post, and have no qualms about sharing it privately. I actually sought it out actively after I heard about the debacle, and was very disappointed when I finally got a copy to find that it was a post that I had already read and dismissed.
I don't think there's anything there, and I know what people think is there, and it lowered my estimation of the people who took it seriously, especially given the mean things Eliezer said to Roko.
Can I haz evil soul crushing idea plz?
But to be serious: yes, if I find the idea foolish, the fact that people take it seriously reduces my optimism as well, just as much as malice on the part of the Less Wrong staff or plain real dark secrets would, since I take Clippy to be a serious and very scary threat (I hope you don't take too much offence, Clippy; you are a wonderful poster). I should have stated that too. But to be honest, it would probably be much less fun actually knowing the evil soul-crushing self-fulfilling prophecy (tm); the situation around it is hilarious.
What really catches my attention however is the thought experiment of how exactly one is supposed to quarantine a very very dangerous idea. Since in the space of all possible ideas, I'm quite sure there are a few that could prove very toxic to humans.
The LW members who take it seriously are doing a horrible job of it.
Upvoted for the cat picture.
As a rather new reader, my impression has been that LW suffers from a moderate case of what in the less savory corners of the Internet would be known as CJS (circle-jerking syndrome).
At the same time, if one is willing to play around this aspect (which is as easy as avoiding certain threads and comment trees), there are discussion possibilities that, to the best of my knowledge, are not matched anywhere else - specifically, the combination of a low effort-barrier to entry, a high average thought-to-post ratio, and a decent community size.
The key to persuasion or manipulation is plausible appeal to desire. The plausibility can be pretty damned low if the desire is strong enough.
I want to write a post about an... emotion, or pattern of looking at the world, that I have found rather harmful to my rationality in the past. The closest thing I've found is 'indignation', defined at Wiktionary as "An anger aroused by something perceived as an indignity, notably an offense or injustice." The thing is, I wouldn't consider the emotion I feel to be 'anger'. It's more like 'the feeling of injustice' in its own right, without the anger part. Frustration, maybe. Is there a word that means 'frustration aroused by a perceived indignity, notably an offense or injustice'? Like, perhaps the emotion you may feel when you think about how pretty much no one in the world or no one you talk to seems to care about existential risks. Not that you should feel the emotion, or whatever it is, that I'm trying to describe -- in the post I'll argue that you should try not to -- but perhaps there is a name for it? Anyone have any ideas? Should I just use 'indignation' and then define what I mean in the first few sentences? Should I use 'adjective indignation'? If so, which adjective? Thanks for any input.
Sounds related to the failure class I call "living in the should-universe".
Righteous indignation is a good word for it.
I, personally, see it as one of the emotional capacities of a healthy person. Kind of like lust. It can be misused, it can be a big time-waster if you let it occupy your whole life, but it's basically a sign that you have enough energy. If it goes away altogether, something may be wrong.
I had a period a few years ago of something like anhedonia. The thing is, I also couldn't experience righteous indignation, or nervous worry, or ordinary irritability. It was incredibly satisfying to get them back. I'm not a psychologist at all, but I think of joy, anger, and worry (and lust) as emotions that require energy. The miserably lethargic can't manage them.
So that's my interpretation and very modest defense of righteous indignation. It's not a very practical emotion, but it is a way of engaging personally with the world. It motivates you in the minimal way of making you awake, alert, and focused on something. The absence of such engagement is pretty horrible.
Pardon the self-promotion, but that sounds like the feeling of recognizing a SAMEL, i.e. that there is some otherwise-ungrounded inherent deservedness of something in the world.
(SAMEL = subjunctive acausal means-end link, elaborated in the article)
I've seen "moral indignation," which might fit (though I think "indignation" still implies anger). I've also heard people who feel that way describe the object of their feelings as "disgusting" or "offensive," so you could call it "disgust" or "being offended." Of course, those people also seemed angry. Maybe the non-angry version would be called "bitterness."
As soon as I wrote the paragraph above, I felt sure that I'd heard "moral disgust" before. I googled it and the second link was this. I don't know about the quality of the study, but you could use the term.
In myself, I have labeled the rationality blocking emotion/behavior as defensiveness. When I am feeling defensive, I am less willing to see the world as it is. I bind myself to my context and it is very difficult for me to reach out and establish connections to others.
I am also interested in ideas related to rationality and the human condition. Not just about the biases that arise from our nature, but about approaches to rationality that work from within our human nature.
I have started an analysis of Buddhism from this perspective. At its core (ignoring the obvious mysticism), I see sort of a how-to guide for managing the human condition. If we are to be rational we need to be willing to see the world as it is, not as we want it to be.
Interestingly enough, this sounds like the emotion that (finally) induced me to overcome akrasia and write a post on LW for the first time, which initiated what has thus far been my greatest period of development as a rationalist.
It's almost as if this feeling is to me what plain anger is to Harry Potter(-Evans-Verres): something which makes everything seem suddenly clearer.
It just goes to show how difficult the art of rationality is: the same technique that helps one person may hinder another.
I made this site last month: areyou1in1000000.com
Neuroskeptic's Help, I'm Being Regressed to the Mean is the clearest explanation of regression to the mean that I've seen so far.
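For anyone who wants to see the effect rather than read about it, a quick simulation works (the population parameters here are arbitrary):

```python
# Regression to the mean in simulation: observed score = skill + noise.
# Select the top scorers on test 1; their test-2 average falls back
# toward the population mean, because part of their test-1 edge was luck.
import random

random.seed(0)
N = 100_000
skill = [random.gauss(100, 10) for _ in range(N)]
test1 = [s + random.gauss(0, 10) for s in skill]
test2 = [s + random.gauss(0, 10) for s in skill]

top = [i for i in range(N) if test1[i] > 120]   # selected on a noisy measure

def mean(xs):
    return sum(xs) / len(xs)

m1 = mean([test1[i] for i in top])   # well above 120
m2 = mean([test2[i] for i in top])   # lower, though still above the mean of 100
print(round(m1, 1), round(m2, 1))
```

The selected group really is more skilled than average (their test-2 mean stays above 100), but the gap shrinks: that shrinkage is the regression.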
Not sure what the current state of this issue is, apologies if it's somehow moot.
I would like to say that I strongly feel Roko's comments and contributions (save one) should be restored to the site. Yes, I'm aware that he deleted them himself, but it seems to me that he acted hastily and did more harm to the site than he probably meant to. With his permission (I'm assuming someone can contact him), I think his comments should be restored by an admin.
Since he was such a heavy contributor, and his comments abound(ed) on the sequences (particularly Metaethics, if memory serves), it seems that a large chunk of important discussion is now full of holes. To me this feels like a big loss. I feel lucky to have made it through the sequences before his egress, and I think future readers might feel left out accordingly.
So this is my vote that, if possible, we should proactively try to restore his contributions up to the ones triggering his departure.
http://www.smbc-comics.com/index.php?db=comics&id=2012#comic
http://www.smbc-comics.com/index.php?db=comics&id=2005#comic
http://www.smbc-comics.com/index.php?db=comics&id=1986#comic
I participated in a survey directed at atheists some time ago, and the report has come out. They didn't mention me by name, but they referenced me on their 15th endnote, which regarded questions they said were spiritual in nature. Specifically, the question was whether we believe in the possibility of human minds existing outside of our bodies. From the way they worded it, apparently I was one of the few not-spiritual people who believed there were perfectly naturalistic mechanisms for separating consciousness from bodies.
In light of the news that apparently someone or something is hacking into automated factory control systems, I would like to suggest that the apocalypse threat level be increased from Guarded (lots of curious programmers own fast computers) to Elevated (deeply inconclusive evidence consistent with a hard takeoff actively in progress).
It looks a little odd for a hard takeoff scenario - it seems to be prevalent only in Iran, it seems configured to target a specific control system, and it uses 0-days wastefully (I see a claim that it uses four 0-days and 2 stolen certificates). On the other hand, this is not inconsistent with an AI going after a semiconductor manufacturer and throwing in some Iranian targets as a distraction.
My preference ordering is friendly AI, humans, unfriendly AI; my probability ordering is humans, unfriendly AI, friendly AI.
Is there enough interest for it to be worth creating a top level post for an open thread discussing Eliezer's Coherent Extrapolated Volition document? Or other possible ideas for AGI goal systems that aren't immediately disastrous to humanity? Or is there a top level post for this already? Or would some other forum be more appropriate?
I'm a translator between people who speak the same language, but don't communicate.
People who act mostly based on their instincts and emotions, and those who prefer to ignore or squelch those instincts and emotions[1], tend to have difficulty having meaningful conversations with each other. It's not uncommon for people from these groups to end up in relationships with each other, or at least working or socializing together.
On the spectrum between the two extremes, I am very close to the center. I have an easier time understanding the people on each side than their counterparts do, it frustrates me when they miscommunicate, and I want to help. This includes general techniques (although there are some good books on that already), explanations of words or actions which don't appear to make sense, and occasional outright translation of phrases ("When they said X, they meant what you would have called Y").
Is this problem, or this skill, something of interest to the LW community at large? In the several days I've been here it's come up on comment threads a couple times. I have some notes on the subject, and it would be useful for me to get feedback on them; I'd like to some day compile them into a guide written for an audience much like this one. Do you have questions about how to communicate with people who think very much unlike you, or about specific situations that frustrate you? Would you like me to explain what appears to be an arbitrary point of etiquette? Anything else related to the topic which you'd like to see addressed?
In short: "I understand the weird emotional people who are always yelling at you, but I'm also capable of speaking your language. Ask me anything."
[1] These are both phrased as pejoratively as I could manage, on purpose. Neither extreme is healthy.
I wanted to say thank you for providing these services. I like performing the same translations, but it appears I'm unable to be effective in a text medium, requiring immediate feedback, body language, etc. When I saw some of your posts on old articles, apparently just as you arrived, I thought to myself that you would genuinely improve this place in ways that I've been thinking were essential.
Thanks! That's actually really reassuring; that kind of communication can be draining (a lot of people here communicate naturally in a way which takes some work for me to interpret as intended). It is good to hear that it seems to be doing some good.
One issue I've frequently stumbled across is people who make claims that they have never truly considered. When I ask for more information, point out obvious (to me) counterexamples, or ask them to explain why they believe it, they get defensive and in some cases quite offended. Some don't want to talk about issues at all, because they feel like discussing their beliefs with me is like being subject to some kind of Inquisition. It seems to me that people of this cut believe that to show you care about someone, you should accept anything they say with complete credulity. Have you found good ways to get people to think about what they believe without making them defensive? Do I just have to couch all my responses in fuzzy words? Using weasel words has always seemed disingenuous to me, but maybe it's worth it if it can get someone to actually consider the opposition: "Idunno, I'm just saying it seems to me, and I might be wrong, that maybe gays are people and deserve all the rights that people get, you know what I'm saying?"
I've been on the other side of this, so I definitely understand why people react that way--now let's see if I understand it well enough to explain it.
For most people, being willing to answer a question or identify a belief is not the same thing as wanting to debate it. If you ask them to tell you one of their beliefs and then immediately try to engage them in justifying it to you, they feel baited and switched into a conflict situation, when they thought they were having a cooperative conversation. You've asked them to defend something very personal, and then are acting surprised when they get defensive.
Keep in mind also that most of the time in our culture, when one person challenges another one's beliefs, it carries the message "your beliefs are wrong." Even if you don't state that outright--and even in the probably rare cases when the other person knows you well enough to understand that isn't your intent--you're hitting all kinds of emotional buttons which make you seem like an aggressor. This is the result of how the other person is wired, but if you want to be able to have this kind of conversation, it's in your interest to work with it.
The corollary to the implied "your beliefs are wrong" is "I know better than you" (because that's how you would tell that they're wrong). This is an incredibly rude signal to send to--well, anyone, but especially to another adult. Your hackles probably rise too when someone signals that they're superior to you and you don't agree; this is the same thing.
The point, then, is not that you need to accept what people you care about say with credulity. It's that you need to accept it with respect. You do not have any greater value than the person you're talking to (even if you are smarter and more rational), just like they don't have any greater value than you (even if they're richer and more attractive). Even if you really were by some objective measure a better person (which is, as far as I can tell, a useless thing to consider), they don't think so, and acting like it will get you nowhere.
Possibly one of the hardest parts of this to swallow is that, when you're choosing words for the purpose of making another person remain comfortable talking to you, whether their beliefs are a good reflection of reality is not actually important. Obviously they think so, and merely contradicting them won't change that (nor should it). So if you sound like you're just trying to convince them that they're wrong, even if that isn't what you mean to do, they might just feel condescended to and walk away.
None of this means that you can't express your own beliefs vehemently ("gay people deserve equal rights!"). It just means that when someone expresses one of theirs, interrogating them bluntly about their reasons--especially if they haven't questioned them before--is more likely to result in defensiveness than in convincing them or even productive debate. This may run counter to your instincts, understandably, but there it is.
No fuzzy words in the world will soften your language if their inflection reveals intensity and superiority. Display real respect, including learning to read your audience and back off when they're upset. (You can always return to the topic another time, and in fact, occasional light conversations will probably do a better job with this sort of person than one long intense one.) If you aren't able to show genuine respect, well, I don't blame them for refusing to discuss their beliefs with you.
Yes please.
Does the term "bridger" ring a bell for you? (It's from Greg Egan's Diaspora, in case it doesn't, and you'd have to read it to get why I think that would be an apt name for what you're describing.)
Since the Open Thread is necessarily a mixed bag anyway, hopefully it's OK if I test Markdown here
test deleted
The Onion parodies cyberpunk by describing our current reality: http://www.theonion.com/articles/man-lives-in-futuristic-scifi-world-where-all-his,17858/
I just discovered (when looking for a comment about an Ursula Vernon essay) that the site search doesn't work for comments which are under a "continue this thread" link. This makes site search a lot less useful, and I'm wondering if that's a cause of other failed searches I've attempted here.
I've noticed this too. There's no easy way to 'unfold all' is there?
In light of XFrequentist's suggestion in "More Art, Less Stink," would anyone be interested in a post consisting of a summary & discussion of Cialdini's Influence?
This is a brilliant book on methods of influencing people. But it's not just Dark Arts - it also includes defense against the Dark Arts!
Nine years ago today, I was just beginning my post-graduate studies. I was running around campus trying to take care of some registration stuff when I heard that unknown parties had flown two airliners into the WTC towers. It was surreal -- at that moment, we had no idea who had done it, or why, or whether there were more planes in the air that would be used as missiles.
It was big news, and it's worth recalling this extraordinarily terrible event. But there are many more ordinary terrible events that occur every day, and kill far more people. I want to keep that in mind too, and I want to make the universe a less deadly place for everyone.
(If you feel like voting this comment up, please review this first.)
I'm taking a grad level stat class. One of my classmates said something today that nearly made me jump up and loudly declare that he was a frequentist scumbag.
We were asked to show that a coin toss fit the criteria of some theorem that talked about mapping subsets of a sigma algebra to form a well-defined probability. Half the elements of the set were taken care of by default (the whole set S and its complement { }), but we couldn't make any claims about the probability of getting Heads or Tails from just the theorem. I was content to assume the coin was fair, or at least assign some likelihood distribution.
But not my frequentist archnemesis! He let it be known that he would level half the continent if the probability of getting Heads wasn't determined by his Expectation divided by the number of events. The number of events. Of an imaginary coin toss. Determine that toss' probability.
It occurs to me that there was a lot of setup for very little punch line in that anecdote. If you are unamused, you are in good company. I ordered R to calculate an integral for me today, and it politely replied: "Error in is.function(FUN) : 'FUN' is missing"
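The measure-theoretic point in that anecdote can be spelled out in a few lines (the set and function names here are mine, for illustration): the axioms pin down P(S) and P(∅), but P(Heads) is a genuinely free parameter.

```python
# The coin-toss sigma-algebra, spelled out. The axioms force P(S) = 1 and
# P({}) = 0, but nothing in the measure-theoretic setup determines P({H}).
# The fairness value p_heads = 0.5 below is exactly that: an assumption.
S = frozenset({"H", "T"})
sigma_algebra = {frozenset(), frozenset({"H"}), frozenset({"T"}), S}

def make_measure(p_heads):
    """A well-defined probability measure for any p_heads in [0, 1]."""
    return {
        frozenset(): 0.0,
        frozenset({"H"}): p_heads,
        frozenset({"T"}): 1.0 - p_heads,
        S: 1.0,
    }

P = make_measure(0.5)   # the fair-coin choice, one of infinitely many
assert all(v >= 0 for v in P.values())                      # non-negativity
assert P[S] == 1.0                                          # normalization
assert P[frozenset({"H"})] + P[frozenset({"T"})] == P[S]    # finite additivity
```

Any value of `p_heads` passes the same checks, which is the point: the theorem gives well-definedness, not the probability of heads.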
NYT magazine covers engineers & terrorism: http://www.nytimes.com/2010/09/12/magazine/12FOB-IdeaLab-t.html
An observer is given a box with a light on top, and given no information about it. At time t0, the light on the box turns on. At time tx, the light is still on.
At time tx, what information can the observer be said to have about the probability distribution of the duration of time that the light stays on? Obviously the observer has some information, but how is it best quantified?
For instance, the observer wishes to guess when the light will turn off, or find the best approximation of E(X | X > tx-t0), where X ~ duration of light being on. This is guaranteed to be a very uninformed guess, but some guess is possible, right?
The observer can establish a CDF for the time at which the light turns off: for t <= tx, p = 0; for t > tx, 0 < p < 1, assuming that the observer can never be certain that the light will ever turn off. What goes on in between is the interesting part, and I haven't the faintest idea how to justify any particular shape for the CDF.
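One way to make this concrete is to pick a prior and simulate. The sketch below is only an illustration: the log-uniform prior over [1, 10^6] seconds is purely an assumption, chosen because it is scale-invariant (it doesn't privilege any particular timescale for the light). It conditions on the light still being on at time t and looks at the remaining on-time:

```python
import math
import random

random.seed(0)
LO, HI = 1.0, 1e6  # assumed prior support for the total duration, in seconds

def sample_duration():
    # Log-uniform draw on [LO, HI].
    return math.exp(random.uniform(math.log(LO), math.log(HI)))

def remaining_time_samples(t, n=100_000):
    # Rejection sampling: keep only the worlds where the light is
    # still on at time t, and record how much longer it stays on.
    return [x - t for x in (sample_duration() for _ in range(n)) if x > t]

for t in (10.0, 1000.0):
    rem = sorted(remaining_time_samples(t))
    print(f"still on at t={t:g}s -> median remaining time {rem[len(rem) // 2]:.0f}s")
```

Under this prior, the total duration conditional on X > t is log-uniform on [t, 10^6], so the median remaining time grows roughly in proportion to how long the light has already been on: the longer you've watched it stay on, the longer you should expect it to keep going. A different prior gives a different CDF, which is exactly the unresolved choice being pointed at above.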
Can anyone suggest any blogs giving advice for serious romantic relationships? I think a lot of my problems come from a poor theory of mind for my partner, so stuff like 5 love languages and stuff on attachment styles has been useful.
Thanks.
Do you really need a "theory of mind" for that?
Our partners are not a foreign species. Communicate lots in an open and honest manner with hir and try to understand what makes that particular person click.
Yes. You are assuming ze has a high level of introspection which would facilitate communication. This isn't always the case.
Yes, you do. Many people with highly developed theories of mind seem to underestimate how much unconscious processing they are doing -- processing that is profoundly difficult for people whose theories of mind are less developed. People who are mildly on the autism spectrum in particular (generally below the threshold of diagnosis) often have a lot of difficulty with this sort of unconscious processing, but can do a much better job if given a lot of explicit rules or heuristics.
Thank you. I believe I may fall in this category. I am highly quantitative and analytical, often to my detriment.
I have two suggestions, which are not so much about romantic relationships as they are about communicating clearly; given your example and the comments below, though, I think they're the kind of thing you're looking for.
The Usual Error is a free ebook (or nonfree dead-tree book) about common communication errors and how to avoid them. (The "usual error" of the title is assuming by default that other people are wired like you -- basically the same as the typical psyche fallacy.) It has a blog as well, although it doesn't seem to be updated much; my recommendation is for the book.
If you're a fan of the direct practical style of something like LW, steel yourself for a bit of touchy-feeliness in UE, but I've found the actual advice very useful. In particular, the page about the biochemistry of anger has been really helpful for me in recognizing when and why my emotional response is out of whack with the reality of the situation, and not just that I should back off and cool down, but why it helps to do so. I can give you an example of how this has been useful for me if you like, but I expect you can imagine.
A related book I'm a big fan of is Nonviolent Communication (no link because its website isn't of any particular use; you can find it at your favorite book purveyor or library). Again, the style is a bit cloying, but the advice is sound. What this book does is lay out an algorithm for talking about how you feel and what you need in a situation of conflict with another person (where "conflict" ranges from "you hurt my feelings" to gang war).
I think it's noteworthy that following the NVC algorithm is difficult. It requires finding specific words to describe emotions, phrasing them in a very particular way, connecting them to a real need, and making a specific, positive, productive request for something to change. For people who are accustomed to expressing an idea by using the first words which occur to them (almost everyone), this requires flexing mental muscles which don't see much use. I think of myself as a good communicator, and it's still hard for me to follow NVC when I'm upset. But the difficulty is part of the point -- by forcing you to stop and rethink how you talk about the conflict, it lets you see it in a way that's less hindered by emotional reflex and more productive towards understanding what's going on and finding a solution.
Neither of these suggestions requires that your partner also read them, but it would probably help. (It just keeps you from having to explain a method you're using.)
If you find a good resource for this which is a blog, I'd be interested in it as well. Maybe obviously, this topic is something I think a lot about.
Both look rather useful, thanks for the suggestions. Also, Google Books has Nonviolent Communication.
I could point to some blogs whose advice seems good to me, but I won't because I think I can help you best by pointing only to material (alas no blogs though) that has actually helped me in a serious relationship -- there being a huge difference in quality between advice of the form "this seems true to me" and advice of the form "this actually helped me".
What has helped me in my relationships more than any other information is the non-speculative part of the consensus among evolutionary psychologists on sexuality. It gives me a vocabulary to express hypotheses about particular situations I face, and a way to winnow the field of prospective hypotheses and bits of advice I find online before choosing which to test. In other words, ev psych allows me to dismiss many ideas so that I do not incur the expense of testing them.
I needed a lot of free time however to master that material. Probably the best way to acquire the material is to read the chapters on sex in Robert Wright's Moral Animal. I read that book slowly and carefully over 12 months or so, and it was definitely worth the time and energy. Well, actually the material in Moral Animal on friendship (reciprocal altruism) is very much applicable to serious relationships, too, and the stuff on sex and friendship together form about half the book.
Before I decided to master basic evolutionary psychology in 2000, the advice that helped me the most was from John Gray, author of Men Are From Mars, Women Are From Venus.
Analytic types will mistrust author and speaker John Gray because he is glib and charismatic (the Maharishi or such who founded Transcendental Meditation once offered to make Gray his successor and the inheritor of his organization) but his pre-year-2000 advice is an accurate map of reality IMHO. (I probably only skimmed Mars and Venus, but I watched long televised lectures on public broadcasting that probably covered the same material.)
An Alternative To "Recent Comments"
For those who may be having trouble keeping up with "Recent Comments" or finding the interface a bit plain, I've written a Greasemonkey script to make it easier/prettier. Here is a screenshot.
Explanation of features:
To install, first get Greasemonkey, then click here. Once that's done, use this link to get to the reader interface.
ETA: I've placed the script in the public domain. Chrome is not supported.
Here's something else I wrote a while ago: a script that gives all the comments and posts of a user on one page, so you can save them to a file or search more easily. You don't need Greasemonkey for this one, just visit http://www.ibiblio.org/weidai/lesswrong_user.php
I put in a 1-hour cache to reduce server load, so you may not see the user's latest work.
The gap between inventing formal logic and understanding human intelligence is as large as the gap between inventing formal grammars and understanding human language.
Human intelligence, certainly; but just intelligence, I'm not so sure.
Relevant to our akrasia articles:
http://www.marginalrevolution.com/marginalrevolution/2010/09/should-you-bet-on-your-own-ability-to-lose-weight.html
I recall someone claiming here earlier that they could do anything if they bet they could, though I can't find it right now. Useful to have some more explicit evidence about that.
Have there been any articles on what's wrong with the Turing test as a measure of personhood? (even in its least convenient form)
In short the problems I see are: False positives, false negatives, ignoring available information about the actual agent, and not reliably testing all the things that make personhood valuable.
This sounds pretty exhaustive.
I'm interested in video game design and game design in general, and also in raising the rationality waterline. I'd like to combine these two interests: to create a rationality-focused game that is entertaining or interesting enough to become popular outside our clique, but that can also effectively teach a genuinely useful skill to players.
I imagine that it would consist of one or more problems which the player would have to be rational in some particular way to solve. The problem has to be:
Interesting: The prospect of having to tackle the problem should excite the player. Very abstract or dry problems would not work; very low-interaction problems wouldn't work either, even if cleverly presented (e.g. you could do Newcomb's problem as a game with plenty of lovely art and window dressing... but the game itself would still only be a single binary choice, which would quickly bore the player).
Dramatic in outcome: The difference between success and failure should be great. A problem in which being rational gets you 10 points but acting typically gets you 8 points would not work; the advantage of applying rationality needs to be very noticeable.
Not rigged (or not obviously so): The player shouldn't have the feeling that the game is designed to directly reward rationality (even though it is, in a sense). The player should think that they are solving a general problem with rationality as their asset.
Not allegorical: I don't want to raise any likely mind-killing associations in the player's mind, like politics or religion. The problem they are solving should be allegorical to real world problems, but to a general class of problems, not to any specific problems that will raise hackles and defeat the educational purpose of the game.
Surprising: The rationality technique being taught should not be immediately obvious to an untrained player. A typical first session should involve the player first trying an irrational method, seeing how it fails, and then eventually working their way up to a rational method that works.
A lot of the rationality-related games that people bring up fail some of these criteria. Zendo, for example, is not "dramatic in outcome" enough for my taste. Avoiding confirmation bias and understanding something about experimental design makes one a better Zendo player... but in my experience not as much as just developing a quick eye for pattern recognition and being able to read the master's actions.
Anyone here have any suggestions for possible game designs?
One idea I'd like to suggest would be a game where the effectiveness of the items a player has changes randomly hour by hour. Maybe a MMO with players competing against each other, so that they can communicate information about which items are effective. Introduce new items with weird effects every so often so that players have to keep an eye on their long term strategy as well.
I think a major problem with that is that most players would simply rely upon the word on the street to tell them what was currently effective, rather than performing experiments themselves. Furthermore, changes in only "effectiveness" would probably be too easy to discover using a "cookbook" of experiments (see the NetHack discussion in this thread).
I'm thinking that the parameters should change just quickly enough to stop consensus forming (maybe it could be driven by negative feedback, so that once enough people are playing one strategy it becomes ineffective). Make using a cookbook expensive. Winning should be difficult, and only just the right combination will succeed.
I think this makes sense, but can you go into more detail about this:
I didn't mean a cookbook as an in-game item (I'm not sure if that's what you were implying...), I meant the term to mean a set of well-known experiments which can simply be re-run every time new results are required. If the game can be reduced to that state, then a lot of its value as a rationality teaching tool (and also as an interesting game, to me at least) is lost. How can we force the player to have to come up with new ideas for experiments, and see some of those ideas fail in subtle ways that require insight to understand?
My tendency is to want to solve this problem by just making a short game, so that there's no need to figure out how to create a whole new, interesting experimental space for each session. This would be problematic in an MMO, where replayability is expected (though there have been some interesting exceptions, like Uru).
Ah, I meant: "Make each item valuable enough that using several just to work out how effective each one is would be a fatal mistake" Instead you would have to keep track of how effective each one was, or watch the other players for hints.
One way to achieve this is to make it a level-based puzzle game. Solve the puzzle suboptimally, and you don't get to move on. Of course, that means that you may need special-purpose programming at each level. On the other hand, you can release levels 1-5 as freeware, levels 6-20 as Product 1.0, and levels 21-30 as Product 2.0.
The puzzles I am thinking of are in the field of game theory, so the strategies will include things like not cooperating (because you don't need to in this case), making and following through on threats, and similar "immoral" actions. Some people might object on ethical or political grounds. I don't really know how to answer except to point out that at least it is not a first-person shooter.
Game theory includes many surprising lessons - particularly things like the handicap principle, voluntary surrender of power, rational threats, and mechanism design. Coalition games are particularly counter-intuitive, but, with experience, intuitively understandable.
But you can even teach some rationality lessons before getting into games proper. Learn to recognize individuals, for example. Not all cat-creatures you encounter are the same character. You can do several problems involving probabilities and inference before the second player ever shows up.
Here's an idea I've had for a while: Make it seem, at first, like a regular RPG, but here's the kicker -- the mystical, magic potions don't actually do anything distinguishable from chance.
(For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you. If you think this would be too obvious, rot13: In the game Earthbound, bar vgrz lbh trg vf gur Pnfrl Wbarf ong, naq vgf fgngf fnl gung vg'f ernyyl cbjreshy, ohg vg pna gnxr lbh n ybat gvzr gb ernyvmr gung vg uvgf fb eneryl gb or hfryrff.)
Set it in an environment like 17th-century England where you have access to the chemicals and astronomical observations they did (but give them fake names to avoid tipping off users, e.g., metallia instead of mercury/quicksilver), and are in the presence of a lot of thinkers working off of astrological and alchemical theories. Some would suggest stupid experiments ("extract aurum from urine -- they're both yellow!") while others would have better ideas.
To advance, you have to figure out the laws governing these things (which would be isomorphic to real science) and put this knowledge to practical use. The insights that had to be made back then are far removed from the clean scientific laws we have now, so it would be tough.
It would take a lot of work to e.g. make it fun to discover how to use stars to navigate, but I'm sure it could be done.
Or you could just go look up the correct answers on gamefaqs.com.
So the game should generate different sets of fake names for each time it is run, and have some variance in the forms of clues and which NPC's give them.
Ever played Nethack? ;)
What if instead of being useless (by having an additional cancelling effect), magical potions etc. had no effect at all? If HP isn't explicitly stated, you can make the player feel like he's regaining health (e.g. by some visual cues), but in reality he'd die just as often.
I think in many types of game there's an implicit convention that they're only going to be fun if you follow the obvious strategies on auto-pilot and don't optimize too much or try to behave in ways that would make sense in the real world, and breaking this convention without explicitly labeling the game as competitive or a rationality test will mostly just be annoying.
The idea of having a game resemble real-world science is a good one and not one that as far as I know has ever been done anywhere near as well as seems possible.
Good point. I guess the game's labeling system shouldn't deceive you like that, but it would need to have characters that promote non-functioning technology, after some warning that e.g. not everyone is reliable, that these people aren't the tutorial.
Best I think would be if the warning came implicitly as part of the game, and a little ways into it.
For example: The player sees one NPC Alex warn another NPC Joe that failing to drink the Potion of Feather Fall will mean he's at risk of falling off a ledge and dying. Joe accepts the advice and drinks it. Soon after, Joe accidentally falls off a ledge and dies. Alex attempts to rationalize this result away, and (as subtly as possible) shrugs off any attempts by the player to follow conversational paths that would encourage testing the potion.
Player hopefully then goes "Huh. I guess maybe I can't trust what NPCs say about potions" without feeling like the game has shoved the answer at them, or that the NPCs are unrealistically bad at figuring stuff out.
Exactly -- that's the kind of thing I had in mind: the player has to navigate through rationalizations and learn to throw out an unreliable claim despite bold attempts to protect it from being proven wrong.
So is this game idea something feasible and which meets your criteria?
I think so, actually. When I start implementation, I'll probably use an Interactive Fiction engine as another person on this thread suggested, because (a) it makes implementation a lot easier and (b) I've enjoyed a lot of IF but I haven't ever made one of my own. That would imply removing a fair amount of the RPG-ness in your original suggestion, but the basic ideas would still stand. I'm also considering changing the setting to make it an alien world which just happens to be very much like 17th century England except filled with humorous Rubber Forehead Aliens; maybe the game could be called Standing On The Eyestalks Of Giants.
On the particular criteria:
Interesting: I think the setting and the (hopefully generated) buzz would build enough initial interest to carry the player through the first frustrating parts where things don't seem to work as they are used to. Once they get the idea that they're playing as something like an alien Newton, that ought to push up the interest curve again a fair amount.
Not (too) allegorical: Everybody loves making fun of alchemists. Now that I think of it, though, maybe I want to make sure the game is still allegorical enough to modern-day issues so that it doesn't encourage hindsight bias.
Dramatic/Surprising: IF has some advantages here in that there's an expectation already in place that effects will be described with sentences instead of raw HP numbers and the like. It should be possible to hit the balance where being rational and figuring things out gets the player significant benefits (Dramatic), but the broken theories being used by the alien alchemists and astrologists are convincing enough to fool the player at first into thinking certain issues are non-puzzles (Surprising).
Not rigged: Assuming the interface for modelling the game world's physics and doing experiments is sophisticated enough, this should prevent the feeling that the player can win by just finding the button marked "I Am Rational" and hitting it. However, I think this is the trickiest part programming-wise.
I'm going to look into IF programming a bit to figure out how implementable some of this stuff is. I won't and can't make promises regarding timescale or even completability, however: I have several other projects going right now which have to take priority.
Thanks, I'm glad I was able to give you the kind of idea you were looking for, and that someone is going to try to implement this idea.
Good -- that's what I was trying to get at. For example, you would want a completely different night sky; you don't want the gamer to be able to spot the Big Dipper (or Southern Cross for our Aussie friends) and then be able to use existing ephemeris data. The planet should have a different tilt, or perhaps be the moon of another planet, so the player can't just say, "LOL, I know the heliocentric model, my planet is orbiting the sun, problem solved!"
Different magnetic field too, so they can't just say, "lol, make a compass, it points north".
I'm skeptical, though, about how well text-based IF can accomplish this -- the text-only interface is really constraining, and would have to tell the user all of the salient elements explicitly. I would be glad to help on the project in any way I can, though I'm still learning complex programming myself.
Also, something to motivate the storyline: you need to come up with better cannonballs for the navy (i.e. you have to identify what increases a metal's yield energy), or come up with a way of detecting counterfeit coins.
I'm not sure if transformice counts as a rationalist game, but it appears to be a bunch of multiplayer coordination problems, and the results seem to support ciphergoth's conjecture on intelligence levels.
Transformice is awesome :D A game hasn't made me laugh that much for a long time.
And it's about interesting, human things, like crowd behaviour and trusting the "leader" and being thrust in a position of responsibility without really knowing what to do ... oh, and everybody dying in funny ways.
Note also the Wiki page, with links to previous threads (I just discovered it, and I don't think I had noticed the previous threads. This one seems better!)
One interesting game topic could be building an AI. Make it look like a nice and cutesy adventure game, with possibly some little puzzles, but once you flip the switch, if you didn't get absolutely everything exactly right, the universe is tiled with paperclips/tiny smiley faces/tiny copies of Eliezer Yudkowsky. That's more about SIAI propaganda than rationality though.
One interesting thing would be to exploit the conventions of video games but make actual winning require to see through those conventions. For example, have a score, and certain actions give you points, with nice shiny feedbacks and satisfying "shling!" sounds, but some actions are vitally important but not rewarded by any feedback.
For example (to keep in the "build an AI" example), say you can hire scientists. Each scientist's profile page lists plenty of impressive certifications (stats like "experiment design", "analysis", "public speaking", etc.) and some filler text about what they did their thesis on and other boring stuff (think: the stats get big icons at the top, while the filler text looks like boring background text). Once you've hired a scientist, you get various bonuses (money, prestige points, experiments), but the only factor of any importance at the end of the game is whether the scientist is "not stupid", and the only way to tell that is from various tell-tale signs of "stupid" in the "boring" filler text -- things like (also) having a degree in theology, or having published a paper on homeopathy ... stuff that would indeed be a bad sign for a scientist, but that nothing in the game ever tells you is bad.
So basically the idea would be that the rules of the game you're really playing wouldn't be the ones you would think at first glance, which is a pretty good metaphor for real life too.
It needs to be well-designed enough so that it's not "guessing the programmer's password", but that should be possible.
Making a game around experiment design would be interesting too - have some kind of physics / chemistry / biology system that obeys some rules (mostly about transformations, not some "real" physics with motion and collisions etc.), have game mechanics that allow you to do something like experimentation, and have a general context (the feedbacks you get, what other characters say, what you can buy) that points towards a slightly wrong understanding of reality. This is bouncing off Silas' ideas, things that people say are good for you may not really be so, etc.
Here again, you can exploit the conventions of video games to mislead the player. For example, red creatures like eating red things, blue creatures like eating blue things, etc. - but the rule doesn't always hold.
"once you flip the switch, if you didn't get absolutely everything exactly right, the universe is tiled with paperclips/tiny smiley faces/tiny copies of Eliezer Yudkowsky."
See also: The Friendly AI Critical Failure Table
And I think all of the other suggestions you made in this comment would make an awesome game! :D
Riffing off my weird biology / chemistry thing: a game based on the breeding of weird creatures, by humans freshly arrived on the planet (add some dimensional travel if you want to justify weird chemistry - I'm thinking of Tryslmaistan).
The catch is (spoiler warning!), the humans got the wrong rules for creature breeding, and some plantcrystalthingy they think is the creatures' food is actually part of their reproduction cycle, where some essential "genetic" information passes.
And most of the things that look like in-game help and tutorials are actually wrong, and based on a model that's more complicated than the real one (it's just a model that's closer to earth biology).
I think this is a great idea. Gamers know lots of things about video games, and they know them very thoroughly. They're used to games that follow these conventions, and they're also (lately) used to games that deliberately avert or meta-comment on these conventions for effect (e.g. Achievement Unlocked), but there aren't too many games I know of that set up convincingly normal conventions only to reveal that the player's understanding is flawed.
Eternal Darkness did a few things in this area. For example, if your character's sanity level was low, you the player might start having unexpected troubles with the interface, e.g. the game would refuse to save on the grounds that "It's not safe to save here", the game would pretend that it was just a demo of the full game, the game would try to convince you that you accidentally muted the television (though the screaming sound effects would still continue), and so on. It's too bad that those effects, fun as they were, were (a) very strongly telegraphed beforehand, and (b) used only for momentary hallucinations, not to indicate that the original understanding the player had was actually the incorrect one.
The problem is that, simply put, such games generally fail on the "fun" meter.
There is a game called "The Void," which begins with the player dying and going to a limbo like place ("The Void"). The game basically consists of you learning the rules of the Void and figuring out how to survive. At first it looks like a first person shooter, but if you play it as a first person shooter you will lose. Then it sort of looks like an RPG. If you play it as an RPG you will also lose. Then you realize it's a horror game. Which is true. But knowing that doesn't actually help you to win. What you eventually have to realize is that it's a First Person Resource Management game. Like, you're playing StarCraft from first person as a worker unit. Sort of.
The world has a very limited resource (Colour) and you must harvest, invest and utilize Colour to solve all your problems. If you waste any, you will probably die, but you won't realize that for hours after you made the initial mistake.
Every NPC in the game will tell you things about how the world works, and every one of those NPCs (including your initial tutorial) is lying to you about at least one thing.
The game is filled with awesome flavor, and a lot of awesome mechanics. (Specifically mechanics I had imagined independently and wanted to make my own game regarding). It looked to me like one of the coolest sounding games ever. And it was amazingly NOT FUN AT ALL for the first four hours of play. I stuck with it anyway, if for no other reason than to figure out how a game with such awesome ideas could turn out so badly. Eventually I learned how to play, and while it never became fun it did become beautiful and poignant and it's now one of my favorite games ever. But most people do not stick with something they don't like for four hours.
Toying with player's expectations sounds cool to the people who understand how the toying works, but is rarely fun for the player themselves. I don't think that's an insurmountable obstacle, but if you're going to attempt to do this, you need to really fathom how hard it is to work around. Most games telegraph everything for a reason.
Huh, sounds very interesting! So my awesome game concept would give rise to a lame game, eh?
*updates*
I hadn't heard of that game, I might try it out. I'm actually surprised a game like that was made and commercially published.
It was made by a Russian developer which is better known for its previous effort, Pathologic, a somewhat more classical first-person adventure game (albeit very weird and beautiful, with artistic echoes from Brecht to Dostoevskij), but with a similar problem of being murderously hard and deceptive - starving to death is quite common. Nevertheless, in Russia Pathologic had acceptable sales and excellent critical reviews, which is why Ice-Pick Lodge could go on with a second project.
It's a good game, just with a very narrow target audience. (This site is probably a good place to find players who will get something out of it, since you have higher than average percentages of people willing to take a lot of time to think about and explore a cerebral game).
Some specific lessons I'd draw from that game and apply here:
Don't penalize failure too hard. The Void's single biggest issue (for me) is that even when you know what you're doing you'll need to experiment and every failure ends with death (often hours after the failure). I reached a point where every time I made even a minor failure I immediately loaded a saved game. If the purpose is to experiment, build the experimentation into the game so you can try again without much penalty (or make the penalty something that is merely psychological instead of an actual hampering of your ability to play the game.)
Don't expect players to figure things out without help. There's a difference between a game that teaches people to be rational and a game that simply causes non-rational people to quit in frustration. Whenever there's a rational technique you want people to use, spell it out. Clearly. Over and over (because they'll miss it the first time).
The Void actually spells out everything as best they can, but the game still drives players away because the mechanics are simply unlike any other game out there. Most games rely on an extensive vocabulary of skills that players have built up over years, and thus each instruction only needs to be repeated once to remind you of what you're supposed to be doing. The Void repeats instructions maybe once or twice, and it simply isn't enough to clarify what's actually going on. (The thing where NPCs lie to you isn't even relevant till the second half of the game. By the time you get to that part you've either accepted how weird the game is or you've quit already).
My sense is that the best approach would be to start with a relatively normal (mechanics-wise) game, and then have NPCs that each encourage specific applications of rationality, but each of which has a rather narrow mindset and so may give bad advice for specific situations. But your "main" friend continuously reminds you to notice when you are confused, and consider which of your assumptions may be wrong. (Your main friend will eventually turn out to be wrong/lying/unhelpful about something, but only the once and only towards the end when you've built up the skills necessary to figure it out).
This was my experience with the Void exactly. Basically all the mechanics and flavors were things I had come up with on my own that I wanted to make games out of, and I'm really glad I played the Void first, because I might have wasted a huge chunk of time making a really bad game if I hadn't gotten to learn from their mistakes.
Text adventures seem suitable for this sort of thing, and are relatively easy to write. They're probably not as good for mass appeal, but might be OK for mass nerd appeal. For these purposes, though, I'm worried that rationality may be too much of a suitcase term, consisting of very different groups of subskills that go well with very different kinds of game.
RPGs (and roguelikes) can involve a lot of optimization/powergaming; the problem is that powergaming could be called rational already. You could
Sorry if this isn't very fleshed-out, just a possible direction.
The Science of Word Recognition, by a Microsoft researcher, contains tales of reasonably well-done science gone persistently awry, to the point that the discredited version is today the most popular one.
That's a really good article, the Microsoft humans really know their stuff.
In the spirit of "the world is mad" and for practical use, NYT has an article titled Forget what you know about good study habits.
Something I learned myself that the article supported: taking tests increases retention.
Something I learned from the article: varying study location increases retention.
Did anyone here read Buckminster Fuller's Synergetics? And if so, did you understand it?
Question about Solomonoff induction: does anyone have anything good to say about how to associate programs with basic events/propositions/possible worlds?
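Not an answer to the hard part (the choice of universal machine), but here's a minimal toy sketch of one common framing, where "programs" are just bitstrings run through a hypothetical identity interpreter standing in for a real universal machine, and an event/proposition is identified with the set of programs whose output satisfies it:

```python
from itertools import product

def interpret(program):
    # Hypothetical stand-in for a universal machine: the identity map.
    # In real Solomonoff induction, choosing this machine is the hard part.
    return program

def solomonoff_prior(max_len):
    # Weight each "program" (bitstring) by 2^-length. A genuine Solomonoff
    # prior needs a prefix-free machine so the weights sum to at most 1;
    # here we just normalize explicitly over programs up to max_len.
    prior = {"".join(bits): 2.0 ** -n
             for n in range(1, max_len + 1)
             for bits in product("01", repeat=n)}
    total = sum(prior.values())
    return {p: w / total for p, w in prior.items()}

def event_probability(prior, predicate):
    # An event/proposition is the set of programs whose output satisfies
    # the predicate; its probability is that set's total prior mass.
    return sum(w for p, w in prior.items() if predicate(interpret(p)))

prior = solomonoff_prior(8)
# e.g. the proposition "the output begins with 1"
p1 = event_probability(prior, lambda out: out.startswith("1"))
```

Under the identity interpreter this gives exactly 1/2, since half the mass at every program length satisfies the predicate; with a non-trivial interpreter the same machinery assigns possible worlds their algorithmic-probability mass.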
Friday's Wondermark comic discusses a possible philosophical paradox that's similar to those mentioned at Trust in Bayes and Exterminating life is rational.
You beat me to it :)
Recently there was a discussion regarding Sex at Dawn. I recently skimmed this book at a friend's house, and realized that the central idea of the book is dependent on a group selection hypothesis. (The idea being that our noble savage bonobo-like hunter-gatherer ancestors evolved a preference for paternal uncertainty as this led to better in group cooperation.) This was never stated in the sequence of posts on the book. Can someone who has read the book confirm/deny the accuracy of my impression that the book's thesis relies on a group selection hypothesis?
No, but the book relies on kin selection to some extent: it's beneficial to share resources with your tribe, but not other tribes.
Since Eliezer has talked about the truth of reductionism and the emptiness of "emergence", I thought of him when listening to Robert Laughlin on EconTalk (near the end of the podcast). Laughlin was arguing that reductionism is experimentally wrong and that everything, including the universal laws of physics, is really emergent. I'm not sure if that means "elephants all the way down" or what.
It's very silly. What he's saying is that there are properties at high levels of organization that don't exist at low levels of organization.
As Eliezer says, emergence is trivial. Everything that isn't quarks is emergent.
His "universality" argument seems to be that different parts can make the same whole. Well of course they can.
He certainly doesn't make any coherent arguments. Maybe he does in his book?
Yet another example of a Nobel prize winner in disagreement with Eliezer within his own discipline.
What is wrong with these guys?
Why, if they would just read the sequences, they would learn the correct way for words like "reduction" and "emergence" to be used in physics.
To be fair, "reductionism is experimentally wrong" is a statement that would raise some argument among Nobel laureates as well.
Argument from some Nobelists. But agreement from others. Google on the string "Philip Anderson reductionism emergence" to get some understanding of what the argument is about.
My feeling is that everyone in this debate is correct, including Eliezer, except for one thing - you have to realize that different people use the words "reductionism" and "emergence" differently. And the way Eliezer defines them is definitely different from the way the words are used (by Anderson, for example) in condensed matter physics.
If the first hit is a fair overview, I can see why you're saying it's a confusion in terms; the only outright error I saw was confusing "derivable" with "trivially derivable."
If you're saying that nobody important really tries to explain things by just saying "emergence" and handwaving the details, like EY has suggested, you may be right. I can't recall seeing it.
Of course, I don't think Eliezer (or any other reductionist) has said that throwing away information so you can use simpler math isn't useful when you're using limited computational power to understand systems which would be intractable from a quantum perspective, like everything we deal with in real life.
Finally prompted by this, but it would have been too off-topic there:
http://lesswrong.com/lw/2ot/somethings_wrong/
The ideas really started forming around the recent 'public relations' discussions.
If we want to change people's minds, we should be advertising.
I do like long drawn out debates, but most of the time they don't accomplish anything and even when they do, they're a huge use of personal resources.
There is a whole industry centered around changing people's minds effectively. They have expertise in this, and they do it way better than we do.
My guess is that "Harry Potter and the Methods of Rationality" is the best piece of publicity the SIAI has ever produced.
I think that the only way to top it would be a Singularity/FAI-themed computer game.
How about a turn-based strategy game where the object is to get deep enough into the singularity to upload yourself before a uFAI shows up and turns the universe into paper clips?
Maybe it would work, and maybe not, but I think that the demographic we want to reach is 4chan - teenage hackers. We need to tap into the "Dark Side" of the Cyberculture.
I don't think that would be very helpful. Advocating rationality (even through Harry Potter fanfiction) helps because people are better at thinking about the future and existential risks when they care about and understand rationality. But spreading singularity memes as a kind of literary genre won't do that. (With all due respect, your idea doesn't even make sense: I don't think "deep enough into the singularity" means anything with respect to what we actually talk about as the "singularity" here (successfully launching a Friendly singularity probably means the world is going to be remade in weeks or days or hours or minutes, and it probably means we're through with having to manually save the world from any remaining threats), and if a uFAI wants to turn the universe into paperclips, then you're screwed anyway, because the computer you just uploaded yourself into is part of the universe.)
Unfortunately, I don't think we can get people excited about bringing about a Friendly singularity by speaking honestly about how it happens purely at the object level, because what actually needs to be done is tons of math (plus some outreach and maybe paper-writing and book-writing and eventually a lot of coding). Saving the world isn't actually going to be an exciting ultimate showdown of ultimate destiny, and any marketing and publicity shouldn't be setting people up for disappointment by portraying it as such... and it should also be making it clear that even if existential risk reduction were fun and exciting, it wouldn't be something you do for yourself because it's fun and exciting, and you don't do it because you get to affiliate with smart/high-status people and/or become known as one yourself, and you don't do it because you personally want to live forever and don't care about the rest of the world, you do it because it's the right thing to do no matter how little you personally get out of it.
So we don't want to push the public further toward thinking of the singularity as a geek / sci-fi / power-fantasy / narcissistic thing (I realize some of those are automatic associations and pattern completions that people independently generate, but that's to be resisted and refuted rather than embraced). Fiction that portrays rationality as virtuous (and transparent, as in the Rationalist Fanfiction Principle) and that portrays transhumanistic protagonists that people can identify with (or at least like) is good because it makes the right methods and values salient and sympathetic and exciting. Giving people a vision of a future where humanity has gotten its shit together as a thing-to-protect is good; anything that makes AI or the Singularity or even FAI seem too much like an end in itself will probably be detrimental, especially if it is portrayed anywhere near anthropomorphically enough for it to be a protagonist or antagonist in a video game.
Only if they can be lured to the Light Side. The *chans seem rather tribal and amoral (at least the /b/s and the surrounding culture; I know that's not the entirety of the *chans, but they have the strongest influence in those circles). If the right marketing can turn them from apathetic tribalist sociopaths into altruistic globalist transhumanists, then that's great, but I wouldn't focus limited resources in that direction. Probably better to reach out to academia; at least that culture is merely inefficient rather than actively evil.
I am impressed. A serious and thoughtful reply to a maybe serious, but definitely not thoughtful, suggestion. Thank you.
"Actively evil" is not "inherently evil". The action currently is over on the evil side because the establishment is boring. Anti-establishment evil is currently more fun. But what happens if the establishment becomes evil and boring? Could happen on the way to a friendly singularity. Don't rule any strategies out. Thwarting a nascent uFAI may be one of the steps we need to take along the path to FAI.
Thank you for taking it well; sometimes I still get nervous about criticizing. :)
I've heard the /b/ / "Anonymous" culture described as Chaotic Neutral, which seems apt. My main concern is that waiting for the right thing to become fun for them to rebel against is not efficient. (Example: Anonymous's movement against Scientology began not in any of the preceding years when Scientology was just as harmful as always, but only once they got an embarrassing video of Tom Cruise taken down from YouTube. "Project Chanology" began not as anything altruistic, but as a morally-neutral rebellion against what was perceived as anti-lulz. It did eventually grow into a larger movement including people who had never heard of "Anonymous" before, people who actually were in it to make the world a better place whether the process was funny or not. These people were often dismissed as "moralfags" by the 4chan old-timers.) Indeed they are not inherently evil, but when morality is not a strong consideration one way or the other, it's too easy for evil to be more fun than good. I would not rely on them (or even expect them) to accomplish any long-term good when that's not what they're optimizing for.
(And there's the usual "herding cats" problem — even if something would normally seem fun to them, they're not going to be interested if they get the sense that someone is trying to use them.)
Maybe some useful goal that appeals to their sensibilities will eventually present itself, but for now, if we're thinking about where to direct limited resources and time and attention, putting forth the 4chan crowd as a good target demographic seems like a privileged hypothesis. "Teenage hackers" are great (I was one!), but I'm not sure about reaching out to them once they're already involved in 4chan-type cultures. There are probably better times and places to get smart young people interested.
Looks like an interesting course from MIT:
Reflective Practice: An Approach for Expanding Your Learning Frontiers
Is anyone familiar with the approach, or with the professor?
Why "antipsychotics" is an unhelpful term even if accurate
The Idea
I am working on a new approach to creating knowledge management systems. An idea that I backed into as part of this work is the context principle.
Traditionally, the context principle states that a philosopher should always ask for a word's meaning in terms of the context in which it is being used, not in isolation.
I've redefined this to make it more general: Context creates meaning and in its absence there is no meaning.
And I've added the corollary: Domains can only be connected if they have contexts in common. Common contexts provide shared meaning and open a path for communication between disparate domains.
Possible Topics
I'm considering posting on how the context principle relates to certain topics. Right now I'm researching and collecting notes.
Possible topics to relate the context principle to:
My Request
I am looking for general feedback from this forum on the context principle and on my possible topics. I have only started working through the sequences so I am interested in specific pointers to posts I should read.
Perplexed has already started this off with his reply to my Welcome to Less Wrong! (2010) introduction.