
Comment author: Alejandro1 26 January 2012 03:56:01AM 40 points [-]

Bear in mind that some contrarian statements might have been upvoted for being valuable as examples and contributions to the thread, rather than for substantial agreement. Also there is a selection effect: a contrarian sharing an unpopular opinion is very likely to upvote it when seeing a kindred spirit, but a non-contrarian who doesn't share it is unlikely to downvote it (especially in a thread like this one where the point is to encourage contrarian opinions to come out).

Comment author: Khoth 29 September 2011 10:36:16AM *  40 points [-]

Generally, if you're given evidence for something, the evidence-giver is trying to convince you of that something. If you're given only weak evidence, that itself is evidence that there is no strong evidence (if there is strong evidence, why didn't they tell you that instead?), and so in some circumstances it could be rational to downgrade your probability estimate.
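To make the update concrete, here is a minimal numeric sketch (every probability below is an invented assumption, not from the comment): an advocate who had strong evidence would presumably show it, so being shown only weak evidence is itself evidence against the claim.

```python
# Toy Bayesian model of "weak evidence can lower your estimate".
# All numbers are illustrative assumptions.

prior_h = 0.5               # prior probability that the claim H is true
p_strong_given_h = 0.7      # if H is true, the advocate likely has strong evidence
p_strong_given_not_h = 0.1  # if H is false, strong evidence is rare

# The advocate presents the best evidence they have, so seeing only weak
# evidence tells you they hold no strong evidence.
p_weak_given_h = 1 - p_strong_given_h          # 0.3
p_weak_given_not_h = 1 - p_strong_given_not_h  # 0.9

posterior_h = (prior_h * p_weak_given_h) / (
    prior_h * p_weak_given_h + (1 - prior_h) * p_weak_given_not_h
)
print(posterior_h)  # 0.25 -- below the 0.5 prior, despite "evidence for" H
```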

In response to Consequentialism FAQ
Comment author: Vladimir_M 27 April 2011 01:03:19AM *  40 points [-]

OK, I've read the whole FAQ. Clearly, a really detailed critique would have to be given at similar length. Therefore, here is just a sketch of the problems I see with your exposition.

For a start, you use several invalid examples, or at least controversial examples that you incorrectly present as clear-cut. For example, the phlogiston theory was nothing like the silly strawman you present. It was a falsifiable scientific theory that was abandoned because it was eventually falsified (when it was discovered that burning stuff adds mass due to oxidation, rather than losing mass due to escaped phlogiston). It was certainly a reductionist theory -- it attempted to reduce fire (which itself has different manifestations) and the human and animal metabolism to the same underlying physical process. (Google "Becher-Stahl theory".) Or, in another place, you present the issue of "opposing condoms" as a clear-cut case of "a horrendous decision" from a consequentialist perspective -- although in reality the question is far less clear.

Otherwise, up to Section 4, your argumentation is passable. But then it goes completely off the rails. I'll list just a few main issues:

  • In the discussion of the trolley problem, you present a miserable caricature of the "don't push" arguments. The real reason why pushing the fat man is problematic requires delving into a broader game-theoretic analysis that establishes the Schelling points that hold in interactions between people, including those gravest ones that define unprovoked deadly assault. The reason why any sort of organized society is possible is that you can trust that other people will always respect these Schelling points without regard to any cost-benefit calculations, except perhaps when the alternative to violating them is by orders of magnitude more awful than in the trolley examples. (I have compressed an essay's worth of arguments into a few sentences, but I hope the main point is clear.)

  • In Section 5, you don't even mention the key problem of how utilities are supposed to be compared and aggregated interpersonally. If you cannot address this issue convincingly, the whole edifice crumbles.

  • In Section 6, at first it seems like you get the important point that even if we agree on some aggregate welfare maximization, we have no hope of getting any practical guidelines for action beyond quasi-deontologist heuristics. But then you boldly declare that "we do have procedures in place for breaking the heuristic when we need to." No, we don't. You may think we have them, but what we actually have are either somewhat more finely tuned heuristics that aren't captured by simple first-order formulations (which is good), or rationalizations and other nonsensical arguments couched in terms of a plausible-sounding consequentialist analysis (which is often a recipe for disaster). The law of unintended consequences often bites even in seemingly clear-cut "what could possibly go wrong?" situations.

  • Along similar lines, you note that in any conflict all parties are quick to point out that their natural rights are at stake. Well, guess what. If they just have smart enough advocates, they can also all come up with different consequentialist analyses whose implications favor their interests. Different ways of interpersonal utility comparison are often themselves enough to tilt the scales as you like. Further, these analyses will all by necessity be based on spherical-cow models of the real world, which you can usually engineer to get pretty much any implication you like.

  • Section 7 is rather incoherent. You jump from one case study to another arguing that even when it seems like consequentialism might imply something revolting, that's not really so. Well, if you're ready to bite awful consequentialist bullets like Robin Hanson does, then be explicit about it. Otherwise, clarify where exactly you draw the lines.

  • Since we're already at biting bullets, your FAQ fails to address another crucial issue: it is normal for humans to value the welfare of some people more than others. You clearly value your own welfare and the welfare of your family and friends more than strangers (and even for strangers there are normally multiple circles of diminishing caring). How to reconcile this with global maximization of aggregate utility? Or do you bite the bullet that it's immoral to care about one's own family and friends more than strangers?

  • Question 7.6 is the only one where you give even a passing nod to game-theoretical issues. Considering their fundamental importance in the human social order and all human interactions, and their complex and often counter-intuitive nature, this fact by itself means that most of your discussion is likely to be remote from reality. This is another aspect of the law of unintended consequences that you nonchalantly ignore.

  • Finally, your idea that it is possible to employ economists and statisticians and get accurate and objective consequentialist analysis to guide public policy is altogether utopian. If such things were possible, economic central planning would be a path to prosperity, not the disaster that it is. (That particular consequentialist folly was finally abandoned in the mainstream after it had produced utter disaster in a sizable part of the world, but many currently fashionable ideas about "scientific" management of government and society suffer from similar delusions.)

In response to Genes are overrated
Comment author: Vladimir_M 20 April 2011 01:16:45AM 39 points [-]

I am only an amateur in all the relevant areas of expertise, but I have invested quite a bit of effort trying to make sense of these controversies. I have to say that your post is very confused, and you seem to lack familiarity with many important facts that would have to be considered before pronouncing such a sweeping judgment.

The lack of obvious correlations between genes and phenotypes implies only that the phenotypes in question are not determined by the genotype in a simple way. If they are determined by complex interactions between genes, then straightforward association studies won't detect this connection. To make an imperfect but relevant analogy, if you took the machine code of various computer programs and did a statistical association study between these codes and the resulting behavior of the computer, while being ignorant of the way the instructions are actually decoded and executed -- as we are still largely ignorant of the relevant biochemistry, which is also far more complicated -- you could easily end up with no observable correlations.
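A hedged toy simulation of this point (an invented setup, not from the comment): if a trait is determined entirely by the interaction of two genes, a single-gene association study sees nothing, even though the genotype fully determines the trait.

```python
# Phenotype = XOR of two genes: a pure interaction effect.
# Marginal correlation with either gene alone is ~0.
import random

random.seed(0)
n = 100_000
gene_a = [random.randint(0, 1) for _ in range(n)]
gene_b = [random.randint(0, 1) for _ in range(n)]
phenotype = [a ^ b for a, b in zip(gene_a, gene_b)]

def correlation(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(correlation(gene_a, phenotype))  # ~0: a single-gene study finds nothing
print(correlation(gene_b, phenotype))  # ~0
interaction = [a ^ b for a, b in zip(gene_a, gene_b)]
print(correlation(interaction, phenotype))  # 1.0: testing the interaction recovers it
```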

Similarly, if some trait can be influenced by environmental factors strongly and rapidly, it is still a total fallacy to conclude that it is therefore determined purely by environmental factors. To take a trivial example, nobody disputes that hair color is highly heritable, but the development of cheap and convenient hair dyes has changed the average hair colors in the population dramatically. The behavior of computer programs is highly dependent on what you give them as input, but it doesn't mean that the program code is irrelevant.

As for heritability studies, you are certainly right that there is a lot of shoddy work, and by necessity they make a whole lot of wildly simplifying assumptions. If there existed only a handful of such studies, one would be well advised not to take them very seriously. However, the amount of data that has been gathered in recent decades is just too overwhelming to dismiss, especially taking into account that often there have been considerable ideological incentives to support the opposite conclusions.

On the whole, you are making a wholly unsubstantiated sweeping conclusion.

In response to Ask and Guess
Comment author: TheOtherDave 01 December 2010 06:50:37PM 39 points [-]

I was raised in a strong Guess culture, then went to a tech university where Askers predominated, and it took me some years to come to terms with the fact that these are simply incompatible conversational styles and the most effective thing for me to do is understand which style my interlocutor is expecting and use that.

This, amusingly, often leads me to ask people whether they are using Ask rules or Guess rules. Except, of course, in situations where I intuit that asking them would be inappropriate, and I have to guess instead.

Bringing college friends home for dinner was the most wearing version of this. On one occasion I had to explicitly explain to a friend that, for her purposes, it was best to assume that the last piece of chicken was simply unavailable to be eaten, ever, by anyone. (There actually was a method for getting it, but it was an Advanced Guess Culture technique, not readily taught in one session.)

Incidentally, my own experience is that Ask and Guess are sometimes misleading labels for the styles they refer to (though they are conventional).

For example, "Ask" culture is often OK with "So, I'm assuming here that A, B, and C are true; based on that yadda yadda" with the implicit expectation is that someone will correct me if I'm wrong. In "Guess" culture this sort of thing carries the equally implicit expectation that nobody will correct me. Here both groups are guessing, but they guess differently.

"Guess" culture also has an implicit expectation in some cases that you do ask, but that an honest answer is not actually permitted... the answer is constrained by the social rules. For example, growing up if a guest says "Well, we should get going." the host is obligated to reply "Oh, but we're having such a good time!" and none of that actually lets you know whether the guest is still welcome or not (or, indeed, whether the guest has any desire to stay or go). (On one occasion, when highly motivated to have a departing guest take leftovers home with her if and only if she actually wanted leftovers, but not knowing her default rules, I ended up saying "So, among your tribe, how many times do I have to repeat an offer to have it count as a genuine offer?")

And "Guess" culture has all kinds of rules for how you communicate to someone exactly what it is you want them to do without being asked.

Comment author: NancyLebovitz 06 October 2015 02:35:05PM 32 points [-]

I have banned advancedatheist. While he's been tiresome, I find that I have more tolerance for nastiness than some, but this recent comment was the last straw. I've found that I can tolerate bigotry a lot better than I can tolerate bigoted policy proposals, and that comment was altogether too close to suggesting that women should be distributed to men they don't want to have sex with.

Comment author: AnnaSalamon 28 January 2015 04:07:50AM 39 points [-]

Re: CFAR's impact: Max Tegmark of the Future of Life Institute emails (and offers for us to publicly quote him):

"CFAR was instrumental in the birth of the Future of Life Institute: 4 of our 5 co-founders are CFAR alumni, and seeing so many talented idealistic people motivated to make the world more rational gave me confidence that we could succeed with our audacious goals."

(FLI is the group that recently organized the Puerto Rico conference, and seems in general to be doing loads of high-impact good.)

Comment author: Costanza 14 July 2014 12:14:35AM 38 points [-]

What is the purpose of an experiment in science? For instance, in the field of social psychology? For instance, what is the current value of the Milgram experiment? A few people in Connecticut did something in a room at Yale in 1961. Who cares? Maybe it's just gossip from half a century ago.

However, some people would have us believe that this experiment has broader significance, beyond the strict parameters of the original experiment, and has implications for (for example) the military in Texas and corporations in California.

Maybe these people are wrong. Maybe the Milgram experiment was a one-off fluke. If so, then let's stop mentioning it in every intro to psych textbook. While we're at it, why the hell was that experiment funded, anyway? Why should we bother funding any further social psychology experiments?

I would have thought, though, that most social psychologists would believe that the Milgram experiment has predictive significance for the real world. A Bayesian who knows about the results of the Milgram experiment should better be able to anticipate what happens in the real world. This is what an experiment is for. It changes your expectations.

However, if a supposedly scientific experiment does nothing at all to alter your expectations, it has told you nothing. You are just as ignorant as you were before the experiment. It was a waste.

Social psychology purports to predict what will happen in the real world. This is what would qualify it as a science. Jason Mitchell is saying it cannot even predict what will happen in a replicated experiment. In so doing, he is proclaiming to the world that he personally has learned nothing from the experiments of social psychology. He is ignorant of what will happen if the experiment is replicated. I am not being uncharitable to Mitchell. He is rejecting the foundations of his own field. He is not a scientist.

Comment author: Viliam_Bur 06 January 2014 03:47:59PM *  34 points [-]

The linked article is too long and it is not obvious what exactly its point is. I kept repeating to myself be specific, be specific while reading it.

I believe there was most likely one specific thing that offended the author... and the rest of the long, unspecific article was simply gathering as many soldiers as possible for the battle -- and judging by the discussion that started here, successfully.

The summary at the end hints that it was a use of word "eugenics" somewhere on LW, or maybe somewhere on some LW fan's blog. Unless that was just a metaphor for something. The author is probably disabled and feels personally threatened by any discussion of the topic, so strongly that they will avoid the whole website if they feel that such discussion would not be banned there. Unless that, too, was a metaphor for something.

(The main lesson for me seems to be this: If you want attention, write an article accusing LW of bad things. LW can't resist this.)

Comment author: RomeoStevens 06 January 2014 05:39:30AM 38 points [-]

"there are differences that are demarcated by ethnicity" and "it sucks when people suffer" seem orthogonal to me.

Comment author: Alejandro1 20 November 2013 07:29:25PM 39 points [-]

The best title for me would have been the combination of two of the options: "Smarter Than Us: The Promise and Peril of Artificial Intelligence".

"Smarter than Us" is a more memorable and more descriptive title when mentioned on its own ("Our Fragile Future" looks like it could refer to global warming, nuclear war, or any other global risk). But the "Promise and Peril" subtitle conveys the focus on AI risk which is absent from "Rise of Machine Intelligence". The other two options are much worse IMO.

Comment author: Yvain 08 August 2013 07:51:27AM *  39 points [-]

None of these are incorporated in molecular biology books and publications that I can find. But the answer was still there: visualize what I read. But not just visualize like the little diagrams of cellular interactions books usually give you – like stupid, over-the-top, Hollywood-style visualization. I had to make it dramatic. I had to mentally reconstruct the biology of a cell in massive, fast, and explosive terms.

I'm having the same problem with molecular biology right now, and I agree with the track you're taking. The issue seems to be the large amount of structure totally devoid of any semantic cues. For example, a typical textbook paragraph might read:

JS-154 is one of five metabolic products of netamine; however, the enzyme that produces it is unknown. It is manufactured in cells in the far rostral region of the cerebrum, but after binding with a leukocynoid it takes a role in maintaining the blood-brain barrier - in particular guiding the movements of lipid molecules.

I find I can read paragraphs like this five or six times, write them on flashcards, enter them into Anki, and my brain still refuses to understand or remember them after weeks of trying.

On the other hand, my brain easily remembers vastly more complicated structures when they're loaded with human-accessible meaning. For example, just by casually reading the Game of Thrones series, I know an extremely intricate web of genealogies, alliances, locations, journeys, battlesites, et cetera. Byte for byte, an average Game of Thrones reader/viewer probably has as much Game of Thrones information as a neuroscience Ph.D. has molecular biology information, but getting the neuroscience info is still a thousand times harder.

Which is interesting, because it seems like it should be possible to exploit isomorphisms between the two areas. For example, the hideous unmemorizable paragraph above is structurally identical to (very minor spoilers):

Jon Snow is one of five children of Ned Stark; however, his mother is unknown. He was born in a castle in the far northern regions of Westeros, but after binding with a white wolf companion he took a role in maintaining the Wall - in particular serving as mentor to his obese friend Samwell.

This makes me wonder if it would be possible to produce a story as enjoyable as Game of Thrones which was actually isomorphic to the most important pathways in molecular biology. So that you could pick up a moderately engaging fantasy book - it wouldn't have to be perfect - read through it in a day or two, and then it ends with "By the way, guess what, you now know everything ever discovered about carbohydrate metabolism". And then there's a little glossary in the back with translations about as complicated as "Jon Snow = JS-154" or "the Wall = the blood-brain barrier". I don't think this could replace a traditional textbook, but it could sure as heck supplement it.
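As a hedged sketch of the glossary mechanic, using mappings implied by the comment's own isomorphism (a real version would obviously need careful curation by a biologist):

```python
# Translate the mnemonic story back into biology via a simple glossary.
# Mappings are taken from the comment's example paragraph pair.
glossary = {
    "Jon Snow": "JS-154",
    "Ned Stark": "netamine",
    "the Wall": "the blood-brain barrier",
}

def translate(story: str) -> str:
    """Substitute each narrative term with its biology counterpart."""
    for narrative_term, biology_term in glossary.items():
        story = story.replace(narrative_term, biology_term)
    return story

print(translate("Jon Snow took a role in maintaining the Wall."))
# -> "JS-154 took a role in maintaining the blood-brain barrier."
```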

This would be very hard to do correctly, but I'd love to see someone try, so much so that it's on my list of things to attempt myself if I ever get an unexpectedly large amount of free time.

Comment author: Kawoomba 21 July 2013 10:08:30PM 38 points [-]

I'm only 10 minutes in, but thoroughly unimpressed so far. I'm typing in real time as I'm watching, so this is an unfiltered commentary and an incomplete summary rolled into one. I will probably miss some arguments, and do a bad job with others, but I only have time to watch it once. The "you" refers to Eneasz.

Your introduction, dragging in the cochlear implants, was much too uncharitable and religion-focused, and your "I really don't understand how anyone would not fight death" not sincere -- I'm sure you know all the rationalisations, that some people just want to get on with their lives not being dominated by the thought of dying, which they personally feel powerless against. You're the host; your introduction primed the debate on religion, which is among the least interesting aspects (to me), and made Brin talk about monasteries later on. Thanks ...

PZ Myers is going on and on about cancer and stem cells on the one hand, and entropy on the other.

EY thankfully clears up the definitions (it makes no sense to talk about actual "immortality", as opposed to extended longevity), and points out the stem cell talk missed the point. So far exactly what I'd hoped he would say, getting the debate out of the "cancer and stem cells" mud. His profession of "I only want immortality for myself because I want it for everyone, and I happen to be among those" is hard to take seriously, I hope he values his own preferences slightly above some random stranger's. Stalled because his script disappeared ;-)

Brin "I deny being an atheist, because I'm too contrary." (???) He also used the word troglodyte. Randomly defends the Bible (see what you did?), saying that the hate-filled part is mostly the Book of Revelations. Goes from caloric restriction, and a supposed century-lifespan limitation to uploading. Right, because stem-cells, gene therapy et al don't offer some, um, in-between solution?

(20 minutes in)

Says the danger may be "immortal lords", plutocracies of bad governance by the few who can afford immortality, proposes such an "attractor-trap" as a solution to the Fermi paradox. "Probably sucks in most aliens." That's that, then. Immortality as a big and vexing problem, stabilizing autocracies since the rulers cling to their thrones by not dying.

You allow for some selfishness in your motives for wanting immortality -- yay!

Brin about the dangers of the Hutterites outbreeding the enlightened world. First 5 generations on a colony world should rut like rabbits (advice from his wife).

EY found his notes and is back in the discussion (yay!). "In the debate to what extent atheism should accommodate theology (...) I naturally am the guy who's more extreme than Richard Dawkins". Step aside, Dick (Richard)! Good comparison of death and smallpox: we learned to understand there's no silver lining to smallpox, and to fight it. Time to do the same with death. Evolution isn't out to be moral, we don't die because the universe thinks that's, like, meaningful. We're free to disagree with dying and to fight it. Even if it did kill Stalin (good guy death). Good counterpoint to the "immortal dictator" argument: countries don't need to naturally die and could potentially go on "forever", yet we don't propose to redraw countries on purpose every couple decades for fear of hegemony. Tries to delineate feasibility from desirability.

(30 minutes in)

PZ Myers acknowledges that some egoism is ok, such as him seeking more health care as he grows older. Brin coughs. PZ ignores it. Alludes to the problem of the commons. Comes back to "we should appreciate how biology actually works". Lack of imagination, Sir! Those dying cells can be optimized. PZ predicts that the first society with longer living members would be quickly destroyed, since it would be too static to compete with the more dynamic systems of the blessedly fast dying.

You point out that with fewer people dying, you need to train fewer replacements. $$$$ saved!

Brin goes full personalisation fallacy: "a burden of proof lies on those who would deny that biology knew what it was doing". (EY probably wants to tell him evolution is a blind idiot god.) Says that "replacing the bad of nature" is dangerous, the system must not be perturbed. Each new generation must be tabula rasas (tabulas rasa) (sic (sic)), programming themselves. Whatever that means. Boomers should die (they are the best argument that generations need to 'go away') because "they are sanctimonious junkies full of self-righteous indignation". Wants to stress intelligence augmentation instead. Goes back to the religion of St. Paul of Tarsus. Disagrees with "there are no possible interpretations" which see the silver-lining approach to death as something positive, which no one ever claimed in this podcast.

(40 minutes in)

PZ says new individuals may be important to generate new novelty. Says there should be a sweet spot of novelty, stability, and life span. Doesn't want to maximize life span over the other parameters at any cost. (As if anyone ever did that, other than a relative majority of terminal patients in our current society.)

EY points out the focus has been overly placed on societal downsides. Those aren't insurmountable, intrinsic problems, and thus not good reasons to ultimately object to really long lifespans. Notes that some people confuse "longer life" with "longer life as an aging and ill old person", which is why the term "health span extension" (over life span extension) may be more clarifying. Talks about upgrading the brain. Should upgrade his mic first. Points out that it's absurd to assume you'll still have your exact same old biological cells at ten thousand years. Says a higher degree of neuroplasticity could also be maintained. Doesn't see an exponential increase in technology. Random shoutout to MetaMed (if I heard correctly). Implies the urgent need to shape our future because we're not guaranteed to get a positive one anyway. Bottom line is that longevity is desirable if there is some plan in place to address the societal concerns, which should be possible.

(50 minutes)

PZ points out brains are dynamic systems. In case we've missed it the first couple times. "Is it the same person when I'm uploaded?" Suggests that kids are already a route to immortality, and not inferior to e.g. uploading immortality. Clearly he puts a somewhat longer timeline on the actuarial escape velocity ;-). Points out the impact would be revolutionary, and not necessarily in the positive sense. That not a lot of our memes may survive such a transition, and not really a whole lot of "us", if we include our culture.

Brin, upon hearing your 5-minute warning, starts with his shoutouts, gotta recommend some books, eh? Summarizes the debate. Reneges on his earlier point on "nature's wisdom", saying there is none, but there's still some kind of "nature's adaptation", which we should ponder with a very serious face and take very seriously, indeed.

Your closing statement, "death doesn't solve any of those aforementioned problems (of hegemony etc.) very well anyways".

PZ's closing statement says death is an intrinsic component of how we work. Immortality would end life as we know it, a "radical transformation". Doesn't want to become a butterfly.

EY's closing statement wants to get away from an "individual versus society" type of framing of the debate, instead immortality should be perceived as potentially desirable for society as a whole.

Comment author: gwern 30 June 2013 02:38:53AM *  33 points [-]

Or alternately, somewhere in the literally thousands and thousands of predictions or claims (I have ~200 in just my personal collection, which is nowhere near comprehensive) spread across the 20k MoR reviews on FF.net, the >5k comments on LW, the 3650 subscribers of the MoR subreddit, the TvTropes discussions etc etc, someone got something right.

You know perfectly well that one does not get to preach about a single right prediction. He had the opportunity to make more than that prediction, and he failed to take it.

Comment author: gwern 14 June 2013 03:25:59AM *  39 points [-]

I thought you were exaggerating there, but I looked it up in my copy and he really did say that: pp. 684-686:

To conclude this Chapter, I would like to present ten "Questions and Speculations" about AI. I would not make so bold as to call them "Answers" - these are my personal opinions. They may well change in some ways, as I learn more and as AI develops more...

Question: Will there be chess programs that can beat anyone?

Speculation: No. There may be programs which can beat anyone at chess, but they will not be exclusively chess players. They will be programs of general intelligence, and they will be just as temperamental as people. "Do you want to play chess?" "No, I'm bored with chess. Let's talk about poetry." That may be the kind of dialogue you could have with a program that could beat everyone. That is because real intelligence inevitably depends on a total overview capacity - that is, a programmed ability to "jump out of the system", so to speak - at least roughly to the extent that we have that ability. Once that is present, you can't contain the program; it's gone beyond that certain critical point, and you just have to face the facts of what you've wrought.

I wonder if he did change his opinion on computer chess before Deep Blue and how long before? I found two relevant bits by him, but they don't really answer the question except they sound largely like excuse-making to my ears and like he was still fairly surprised it happened even as it was happening; from February 1996:

Several cognitive scientists said Deep Blue's victory in the opening game of the recent match told more about chess than about intelligence. "It was a watershed event, but it doesn't have to do with computers becoming intelligent," said Douglas Hofstadter, a professor of computer science at Indiana University and author of several books about human intelligence, including "Godel, Escher, Bach," which won a Pulitzer Prize in 1980, with its witty argument about the connecting threads of intellect in various fields of expression. "They're just overtaking humans in certain intellectual activities that we thought required intelligence. My God, I used to think chess required thought. Now, I realize it doesn't. It doesn't mean Kasparov isn't a deep thinker, just that you can bypass deep thinking in playing chess, the way you can fly without flapping your wings."...In "Godel, Escher, Bach" he held chess-playing to be a creative endeavor with the unrestrained threshold of excellence that pertains to arts like musical composition or literature. Now, he says, the computer gains of the last decade have persuaded him that chess is not as lofty an intellectual endeavor as music and writing; they require a soul. "I think chess is cerebral and intellectual," he said, "but it doesn't have deep emotional qualities to it, mortality, resignation, joy, all the things that music deals with. I'd put poetry and literature up there, too. If music or literature were created at an artistic level by a computer, I would feel this is a terrible thing."

And from January 2007:

Kelly said to me, "Doug, why did you not talk about the singularity and things like that in your book?" And I said, "Frankly, because it sort of disgusts me, but also because I just don't want to deal with science-fiction scenarios." I'm not talking about what's going to happen someday in the future; I'm not talking about decades or thousands of years in the future...And I don't have any real predictions as to when or if this is going to come about. I think there's some chance that some of what these people are saying is going to come about. When, I don't know. I wouldn't have predicted myself that the world chess champion would be defeated by a rather boring kind of chess program architecture, but it doesn't matter, it still did it. Nor would I have expected that a car would drive itself across the Nevada desert using laser rangefinders and television cameras and GPS and fancy computer programs. I wouldn't have guessed that that was going to happen when it happened. It's happening a little faster than I would have thought, and it does suggest that there may be some truth to the idea that Moore's Law [predicting a steady increase in computing power per unit cost] and all these other things are allowing us to develop things that have some things in common with our minds. I don't see anything yet that really resembles a human mind whatsoever. The car driving across the Nevada desert still strikes me as being closer to the thermostat or the toilet that regulates itself than to a human mind, and certainly the computer program that plays chess doesn't have any intelligence or anything like human thoughts.

Comment author: David_Gerard 01 February 2013 10:25:24AM *  29 points [-]

Be sure to screenshot any comment you make that you want to preserve, or comments by others that should be preserved. LessWrong is now the sort of site where critical comments that cannot by any sane stretch be called trolling silently vanish.

If your concern is public relations, systematically deleting critique is amongst the stupidest things I can think of you doing. This is the Internet, where that sort of behaviour ensures preservation. For example, if MIRI could no longer be trusted to be honest, a bot to automatically preserve all comments to LW would be ridiculously simple to write.
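As a rough illustration of how simple such a bot could be, here is a hedged sketch; the thread URL is hypothetical, and a real version would want the site's actual comment feed rather than raw page snapshots:

```python
# Minimal page-snapshot bot: periodically saves timestamped copies of
# watched threads, so silently deleted comments survive in older copies.
import hashlib
import time
from pathlib import Path

import requests

THREADS = ["http://lesswrong.com/r/discussion/example-thread/"]  # hypothetical URL
ARCHIVE = Path("lw_archive")
ARCHIVE.mkdir(exist_ok=True)

def snapshot(url: str) -> None:
    """Save a timestamped copy of the page, skipping unchanged content."""
    html = requests.get(url, timeout=30).text
    digest = hashlib.sha256(html.encode()).hexdigest()[:16]
    if not any(p.name.endswith(f"_{digest}.html") for p in ARCHIVE.iterdir()):
        (ARCHIVE / f"{int(time.time())}_{digest}.html").write_text(html)

while True:
    for thread in THREADS:
        snapshot(thread)
    time.sleep(3600)  # re-snapshot hourly
```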

Really, MIRI. Just what the hell do you think you're achieving with this?

Comment author: SilasBarta 08 September 2012 04:36:41AM *  33 points [-]

I'm never a fan of "don't"-oriented guides to social interaction. In my experience, the reason people do things that are taken as creepy is that they don't know a better way -- if they did, wouldn't they do that and thus avoid alienating everyone in the first place?

Giving more "don'ts" doesn't solve that problem: it just makes it harder to locate the space of socially-optimal behavior. What's worse, being extremely restrictive in the social risks you take can itself be taken as creepy! ("Gee, this guy never seems to start conversations with anyone...")

These guides should instead say what to do, not what not to do, that will make the group more comfortable around you.

Edit: Take this one in particular. 90% is "don'ts", 5% is stuff of questionable relevance to the archetypal target of these guides (the problem is that male nerds announce their sexual fetishes too early? really?), and the last 5% is the usual vague "be higher status" advice which, if it were as easy as suggested, would have obviated the need for this advice in the first place.

(To its credit, it has a link to more general social adeptness advice that I didn't read, but then that article, if useful, should be the one linked, not this one.)

Comment author: Viliam_Bur 11 June 2012 04:52:15PM *  38 points [-]

Intellectual insularity arises because we don't, en masse, read other sources, so we can't discuss them. Sure, good books are mentioned in the post, but that didn't create a collective action. What could?

Proposal: At the beginning of the month, let's choose and announce a "book of the month". At the end of the month, we will discuss the book. (During the month, discussing the book should probably be forbidden, to avoid spoilers and discouraging people who haven't read it yet.)

Have we grown as a website? I don't know -- what metric do you use? I guess the number of members / comments / articles is growing, but that's not exactly what we want. So, what exactly do we want? First step could be to specify the goal. Maybe it could be the articles -- we could try to create more high-quality articles that would be very relevant to science and rationality, but also accessible for a random visitor. Seems like the "Main" part of the site is here for this goal, except that it also contains things like "Meetups" and "Rationality Quotes".

Proposal: Refactor LW into more categories. I am not sure how exactly, but the current "Main" and "Discussion" categories feel rather unnatural. (Are they supposed to simply mean: higher importance / lower importance?) A quick idea: Announcements for information about SIAI and upcoming meetups; Forum for repeating topics (open discussion, rationality quotes, media thread, group diary); Top Articles for high-voted articles, and Articles for the remaining articles. In this view, our metric could be to have enough "Top Articles", though of course having more meetups is also great.

Also, why are Eliezer's articles so good? He chose one topic and gradually developed it. It was not "hit and run" blogging, but more like teaching lessons at school. Only later, another topic. That's why his articles literally make sequences; most other articles don't.

Proposal: We could choose one topic to educate other people about, such as mathematics or statistics or programming, and write a series of articles on this topic. (This can be also done by one person.) It is important to have more articles in sequence, a smooth learning curve, so they don't overwhelm the layman immediately.

The common factor to all three proposals is: some coordinated action is necessary. When LW was Eliezer's blog, he did not need to coordinate with himself, but he was making some strategic decisions. To continue LW less chaotically, we would need either a "second Eliezer" (for example Luke wrote a sequence), or a method to make group decisions. Group coordination is generally a difficult problem -- it can be done, but we shouldn't expect it to happen automatically. (One possible solution could be to pay someone to write another sequence.)

In response to Off to Alice Springs
Comment author: MileyCyrus 17 May 2012 01:21:30AM 39 points [-]

I've been in Alice Springs for a couple of weeks now, and the competition is pretty stiff. None of the bars are hiring, and they prefer locals anyway. Contacted the parks nearby; they're all full. I did find a job that pays $26/hr (after super), but I'm only getting 20 hours a week. (Plus another 15 minutes of unpaid labor before every shift). There's a good chance I'll be fired in a couple weeks too.

And a native Australian told me it's only going to get worse. The NT winter holidays are about to start, meaning that all the locals in school right now are going to want a job. I've noticed more backpackers coming by here too.

Anyone can PM me if they want details.

Comment author: Xachariah 28 March 2012 06:04:10PM *  35 points [-]

Idea: Making the money back will be much more difficult than most people anticipate, including Harry.

Reason: Many wizards are highly motivated towards finance and would exhaust every opportunity to generate infinite gold. The rich wizards of the Wizengamot considered 100,000 galleons to be a lot of money.

First, imagine all the ways a wizard could make effectively infinite amounts of muggle money. Arbitrage. Use a time turner and win at the stock market. Use a time turner and win the super-lotto. Imperius (or love potion, false memory charm, groundhog day attack, etc) any billionaire and take part of their fortune. Mind trick some bankers with fake documents (as Dumbledore does in book 6). Go rob some banks with invisibility and teleportation (and/or a time turner). Use magic to secure a job with a 50 million dollar golden parachute with very generous terms. Make huge amounts of drug money as a courier via teleportation/portkey. Sell 5 galleon trinkets to muggle collectors for millions of dollars each. Etc., etc., etc..

Some of them are more risky, some of them are less risky, but I bet that any member of these forums could get at least $50 million in a week if we were wizards.

And yet, when they mention a price of 100,000 galleons people are shocked. The reaction does not look like it's 1/15th of a week's worth of effort he's got to worry about. Dumbledore views it as a major problem that Harry is 60,000 galleons in debt. We know from chapter 70 that it's a known thing that witches and wizards will trick a muggle with a love potion and rape them. Yet nobody thinks to slip Bill Gates a love potion, convince him to part with $2 billion, and blow Lucius out of the water with 100 million galleons. And these are among the most financially motivated people in all of wizardry, not the common population, who consider 2 million pounds as more than weekend spending money. I notice I am confused.

I'll brainstorm some possible explanations:

  • Gringotts won't mint your gold for a nominal fee: Griphook could have been lying, mistaken, or omitted something. Maybe you bring in a ton of gold and they just laugh at it for not having a special magical signature. Unlikely but possible.

  • Gold isn't available to purchase with muggle money: Wizards could own the gold exchanges and gold mines. They do nominal trading for electronics and jewelry, but the vast share of gold goes to the wizarding world. Possible, but it would drastically change the face of the real world (eg World Reserves would be a lie, and Ron Paul is a wizard).

  • The Department of Magical Law Enforcement is way more effective than I imagine: They can find and intervene in not only all cases of magic misuse (eg imperius or bank robberies), but check other means like love potions. Seems unlikely, considering the current crime investigation and how the last war went. Result - Arbitrage and stock/lottery manipulation work.

  • The wizarding world is full of complete inverse-omega class idiots: Always a good theory. But it doesn't sound right for the entirety of the wizarding world (including a ton of muggle-born) to act so completely stupid.

  • The financial tycoons on Wizengamot actually do this: Maybe most of the Wizengamot fortunes exist due to questionable sources. That would explain the majority of evil people doing the voting. Still, that doesn't explain the reaction to the 100,000 galleons.

  • The people who would do this are not on the Wizengamot: Maybe this does happen. Perhaps all the muggle-born realize how easy it is to live a life of luxury in the muggle world and do exactly that, and only venture into the magical world when they want to go shopping. They have the best conveniences of both worlds and none of the dangers of either. This... actually sounds kinda plausible. Plus, there isn't a great job market for muggle-born.

Something doesn't add up. The Wizengamot is full of bright, ambitious people, most of whom have dedicated their lives to finance (makes 4 unlikely). If they're arguing over lucrative ink importation rights it means they've already figured out arbitrage. They wouldn't worry about importing ink, if they weren't leveraging different prices between the market where they're purchasing ink and the market where they're selling ink. Something as simple as triangle arbitrage should be figured out immediately. If wizards already discovered arbitrage, but they don't try and arbitrage in the muggle markets directly, it would be evidence that 1 or 2 is in play. 3 and 5 are already unlikely, so I guess 1&2 or 6 make sense.
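For concreteness, here is a hedged sketch of the triangle-arbitrage check itself, with entirely invented exchange rates; the point is only that detecting the opportunity is a one-line multiplication:

```python
# Multiply exchange rates around a cycle; a product above 1.0 means each
# unit sent around the loop comes back as more than one unit.
# All rates below are invented for illustration.
rates = {
    ("GBP", "galleon"): 0.02,     # pounds -> galleons
    ("galleon", "gold_oz"): 0.5,  # galleons -> ounces of gold
    ("gold_oz", "GBP"): 250.0,    # gold -> pounds
}

def round_trip(*legs):
    """Return the multiplicative factor for one trip around the cycle."""
    product = 1.0
    for leg in legs:
        product *= rates[leg]
    return product

factor = round_trip(("GBP", "galleon"), ("galleon", "gold_oz"), ("gold_oz", "GBP"))
print(factor)  # 2.5: every pound sent around this loop becomes 2.5 pounds
```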

I'd be interested to see if Harry actually manages to make infinite money, and if so what it means about the world.

Comment author: [deleted] 25 January 2012 10:25:52PM *  35 points [-]

If a "Shit Rationalists Say" thread would result in net positive utility, I want to believe that a "Shit Rationalists Say" thread would result in net positive utility.

If a "Shit Rationalists Say" thread would not result in net positive utility, I want to believe that a "Shit Rationalists Say" thread would not result in net positive utility.

Let me not become attached to beliefs I may not want.

Comment author: XiXiDu 20 January 2012 10:46:56AM *  34 points [-]

I can smell the "arrogance," but do you think any of the claims in these paragraphs is false?

I am the wrong person to ask if "a doctorate in AI would be negatively useful". I guess it is technically useful. And I am pretty sure that it is wrong to say that others are "not remotely close to the rationality standards of Less Wrong". That's of course the case for most humans, but I think that there are quite a few people out there who are at least at the same level. I further think that it is quite funny to criticize people on whose work your arguments for risks from AI depend.

But that's beside the point. Those statements are clearly false when it comes to public relations.

If you want to win in this world, as a human being, you are either smart enough to be able to overpower everyone else or you actually have to get involved in some fair amount of social engineering, signaling games and need to refine your public relations.

Are you able to solve friendly AI, without much more money, without hiring top-notch mathematicians, and then solve general intelligence to implement it and take over the world? If not, then you will at some point either need much more money or convince actual academics to work for you for free. And, most importantly, if you don't think that you will be the first to invent AGI, then you need to talk to a lot of academics, companies and probably politicians to convince them that there is a real risk and that they need to implement your friendly AI theorem.

It is of topmost importance to have an academic degree and reputation to make people listen to you. Because at some point it won't be enough to say, "I am a research fellow of the Singularity Institute who wrote a lot about rationality and cognitive biases and you are not remotely close to our rationality standards." Because at the point that you utter the word "Singularity" you have already lost. The very name of your charity already shows that you underestimate the importance of signaling.

Do you think IBM, Apple or DARPA care about a blog and a popular fanfic? Do you think that you can even talk to DARPA without first getting involved in some amount of politics, making powerful people aware of the risks? And do you think you can talk to them as a "research fellow of the Singularity Institute"? If you are lucky then they might ask someone from their staff about you. And if you are really lucky then they will say that you are for the most part well-meaning and thoughtful individuals who never quite grew out of their science-fiction addiction as adolescents (I didn't write that line myself, it's actually from an email conversation with a top-notch person that didn't give me their permission to publish it). In any case, you won't make them listen to you, let alone do what you want.

Compare the following:

Eliezer Yudkowsky, research fellow of the Singularity Institute.

Education: -

Professional Experience: -

Awards and Honors: A lot of karma on lesswrong and many people like his Harry Potter fanfiction.

vs.

Eliezer Yudkowsky, chief of research at the Institute for AI Ethics.

Education: He holds three degrees from the Massachusetts Institute of Technology: a Ph.D in mathematics, a BS in electrical engineering and computer science, and an MS in physics and computer science.

Professional Experience: He worked on various projects with renowned people making genuine insights. He is the author of numerous studies and papers.

Awards and Honors: He holds various awards and is listed in the Who's Who in computer science.

Who are people going to listen to? Well, okay...the first Eliezer might receive a lot of karma on lesswrong, the other doesn't have enough time for that.

Another problem is how you handle people who disagree with you and who you think are wrong. Concepts like "Well-Kept Gardens Die By Pacifism" will at some point explode in your face. I have chatted with a lot of people who left lesswrong and who portray lesswrong/SI negatively. And the number of those people is growing. Many won't even participate here because members are unwilling to talk to them in a charitable way. That kind of behavior causes them to group together against you. Well-kept gardens die by pacifism, others are poisoned by negative karma. A much better rule would be to keep your friends close and your enemies closer.

Think about it. Imagine how easy it would have been for me to cause serious damage to SI and the idea of risks from AI by writing different kinds of emails.

Why does that RationalWiki entry about lesswrong exist? You are just lucky that they are the only people who really care about lesswrong/SI. What do you think will happen if you continue to act like you do and real experts feel uncomfortable about your statements or even threatened? It just takes one top-notch person, who becomes seriously bothered, to damage your reputation permanently.

Comment author: XiXiDu 19 January 2012 10:08:26AM *  38 points [-]

Since EY claims to be doing math, he should be posting at least a couple of papers a year on arxiv.org...

Even Greg Egan managed to copublish papers on arxiv.org :-)

ETA

Here is what John Baez thinks about Greg Egan (science fiction author):

He's incredibly smart, and whenever I work with him I feel like I'm a slacker. We wrote a paper together on numerical simulations of quantum gravity along with my friend Dan Christensen, and not only did they do all the programming, Egan was the one who figured out a great approximation to a certain high-dimensional integral that was the key thing we were studying. He also more recently came up with some very nice observations on techniques for calculating square roots, in my post with Richard Elwes on a Babylonian approximation of sqrt(2). And so on!

That's actually what academics should be saying about Eliezer Yudkowsky if it is true. How does an SF author manage to get such a reputation instead?

Comment author: Alicorn 14 May 2011 05:59:26AM 36 points [-]

I want the world to be saved.

Comment author: ata 22 April 2011 05:38:55AM *  39 points [-]

This makes it sound more like a cult rather than a group of rational people working together.

...they "grew long mustaches which they would twirl with melodramatic flair as they savaged a programmer's code", for god's sake. This is just a group of people who decided to have fun with their identities, go about their jobs in a bit more theatrical a manner than usual, and make people's days more surreal, and managed to get their work done more effectively and more enjoyably in the process. (Rational doesn't mean boring.) I'm sort of used to random things in nearby memespace regions being accused of being cults, but this doesn't even seem to have the surface similarities that are usually brought up to support those accusations.

Comment author: [deleted] 21 April 2011 01:28:02AM 36 points [-]

"Facts that need pointing out, although they are plain on inspection" is a not-too-inaccurate paraphrase of the definition "class NP problems" in computer science. You aren't describing a failure of rationality, but a very basic limitation of knowledge generally: it's harder to solve a problem (how can I toast marshmallows in my studio) than to verify that a proposed solution works (put them in the oven).

Comment author: whpearson 03 March 2011 03:07:19AM *  38 points [-]

I've been wondering if the "can't get crap done" malaise of the lesswrong community is based in part on its format and feedback system.

I am part of another community (a hackspace) with a similar makeup in members, geeky computery people, and stuff gets done. Hackdays are done, workshops are organised, code is altered, things are created. "What are you working on" is a common question.

The thingiverse and github communities are on-line ones where people do stuff.

So what is the difference? Lesswrong is a talking shop, you are given positive feedback for making a good post or comment. It will attract people that enjoy and are good at discussion. You also might get evaporative cooling, where people that like action go elsewhere.

What makes github or thingiverse different? The base unit of thing that might get people interested in you is a project, something you have created or are in the process of doing.

If anyone is interested in making a community that rewards doing projects in a rationalist frame (maximum effect for the effort), get in contact. I'm currently working my way there very slowly, through an indirect path.

Edit: See here for details http://groups.google.com/group/group-xyz

Comment author: bentarm 08 January 2011 12:05:42PM 37 points [-]

I'm sorry, but you just don't get a Bayes Factor of 10^40 by considering the alleged testimony of people who have been dead for 2000 years. There have to be thousands of things which are many orders of magnitude more likely than this that could have resulted in the testimony being corrupted or simply falsified.

You don't even need to read the article to see that 10^39 is just a silly number, but for those interested, it is obtained by assuming that the probability of each of the disciples believing in the Resurrection is independent of the probabilities for the other disciples. Despite the fact that the independence assumption is clearly nonsense, and they themselves describe it as a "first approximation", they then go on to quote this 10^39 figure throughout the rest of the article, and in the interview.
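For what it's worth, here is a hedged reconstruction of how an independence assumption manufactures a number that size (the per-witness factor below is illustrative, not a quote from the paper):

```latex
% If each of n witnesses is assigned an individual Bayes factor f, and the
% testimonies are treated as independent, the factors simply multiply to f^n.
% For example, f = 10^3 per witness with n = 13 witnesses gives:
\[
  \underbrace{10^{3} \times 10^{3} \times \cdots \times 10^{3}}_{13\ \text{witnesses}}
  \;=\; \left(10^{3}\right)^{13} \;=\; 10^{39}.
\]
% Drop independence -- allow a single shared cause of error, such as a
% conspiracy or a copied story -- and the exponent collapses.
```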

I'm sorry, but it's this section where the paper just starts to get silly.

One hypothesis that need not detain us for long is that the disciples themselves did not believe what they were proclaiming, that they were neither more nor less than frauds engaging in an elaborate conspiracy.

Well, ok, that does sound pretty unlikely. But is its improbability really even on the order of 10^39? Have the authors actually thought about what 10^39 means?

If you took every single person who has ever lived, and put them in a situation similar to the disciples every second for the entire history of the Universe, you wouldn't even be coming close to 10^39 opportunities for them to make up such an elaborate plot. Are they really suggesting that it's that unlikely?

Comment author: John_Maxwell_IV 22 May 2015 04:54:38AM *  37 points [-]

Thanks for sharing your contrarian views, both with this post and with your previous posts. Part of me is disappointed that you didn't write more... it feels like you have several posts' worth of objections to Less Wrong here, and at times you are just vaguely gesturing towards a larger body of objections you have towards some popular LW position. I wouldn't mind seeing those objections fleshed out into long, well-researched posts. Of course you aren't obliged to put in the time & effort to write more posts, but it might be worth your time to fix specific flaws you see in the LW community given that it consists of many smart people interested in maximizing their positive impact on the far future.

I'll preface this by stating some points of general agreement:

  • I haven't bothered to read the quantum physics sequence (I figure if I want to take the time to learn that topic, I'll learn from someone who researches it full-time).

  • I'm annoyed by the fact that the sequences in practice seem to constitute a relatively static document that doesn't get updated in response to critiques people have written up. I think it's worth reading them with a grain of salt for that reason. (I'm also annoyed by the fact that they are extremely wordy and mostly without citation. Given the choice of getting LWers to either read the sequences or read Thinking Fast and Slow, I would prefer they read the latter; it's a fantastic book, and thoroughly backed up by citations. No intellectually serious person should go without reading it IMO, and it's definitely a better return on time. Caveat: I personally haven't read the sequences through and through, although I've read lots of individual posts, some of which were quite insightful. Also, there is surprisingly little overlap between the two works and it's likely worthwhile to read both.)

And here are some points of disagreement :P

You talk about how Less Wrong encourages the mistake of reasoning by analogy. I searched for "site:lesswrong.com reasoning by analogy" on Google and came up with these 4 posts: 1, 2, 3, 4. Posts 1, 2, and 4 argue against reasoning by analogy, while post 3 claims the situation is a bit more nuanced. In this comment here, I argue that reasoning by analogy is a bit like taking the outside view: analogous phenomena can be considered part of the same (weak) reference class. So...

  • Insofar as there is an explicit "LW consensus" about whether reasoning by analogy is a good idea, it seems like you've diagnosed it incorrectly (although maybe there are implicit cultural norms that go against professed best practices).

  • It seems useful to know the answer to questions like "how valuable are analogies", and the discussions I linked to above seem like discussions that might help you answer that question. These discussions are on LW.

  • Finally, it seems you've been unable to escape a certain amount of reasoning by analogy in your post. You state that experimental investigation of asteroid impacts was useful, so by analogy, experimental investigation of AI risks should be useful.

The steelman of this argument would be something like "experimentally, we find that investigators who take experimental approaches tend to do better than those who take theoretical approaches". But first, this isn't obviously true... mathematicians, for instance, have found theoretical approaches to be more powerful. (I'd guess that the developer of Bitcoin took a theoretical rather than an empirical approach to creating a secure cryptocurrency.) And second, I'd say that even this argument is analogy-like in its structure, since the reference class of "people investigating things" seems sufficiently weak to start pushing in to analogy territory. See my above point about how reasoning by analogy at its best is reasoning from a weak reference class. (Do people think this is worth a toplevel post?)

This brings me to what I think is my most fundamental point of disagreement with you. Viewed from a distance, your argument goes something like "Philosophy is a waste of time! Resolve your disagreements experimentally! There's no need for all this theorizing!" And my rejoinder would be: resolving disagreements experimentally is great... when it's possible. We'd love to run a randomized controlled trial of whether universes with a Machine Intelligence Research Institute are more likely to have a positive singularity, but unfortunately we don't currently know how to do that.

There are a few issues with putting too much emphasis on experimentation over theory. The first is that you may be tempted to prefer experimentation even for problems that theory is better suited to (e.g. empirically testing prime number conjectures). The second is that you may fall prey to the streetlight effect and prioritize areas of investigation that look experimentally tractable, ignoring questions that are both very important and not very tractable experimentally.
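
To make the first issue concrete, here is a minimal sketch in Python (assuming sympy for primality testing) of what "empirically testing" a prime number conjecture looks like, with Goldbach's conjecture as the stock example:

    # Empirically checking Goldbach's conjecture (every even n > 2 is the
    # sum of two primes) up to a bound. A pass is evidence, but no finite
    # run settles the universal claim -- which is why theory wins here.
    from sympy import isprime

    def goldbach_holds(n):
        """True if even n > 2 can be written as a sum of two primes."""
        return any(isprime(p) and isprime(n - p) for p in range(2, n // 2 + 1))

    print(all(goldbach_holds(n) for n in range(4, 10_000, 2)))  # True -- so far

However far the loop runs, the universal claim stays open; a proof closes it in one stroke, which is the sense in which theory is better suited to this kind of problem.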

You write:

Well, much of our uncertainty about the actions of an unfriendly AI could be resolved if we were to know more about how such agents construct their thought models, and relatedly what languages were used to construct their goal systems.

This would seem to depend on the specifics of the agent in question, but it's a potentially interesting line of inquiry. My impression is that MIRI thinks most possible AGI architectures wouldn't meet its standards for safety, so given that their ideal architecture is so safety-constrained, they're focused on developing the safety stuff first before working on constructing thought models etc. This seems like a pretty reasonable approach for an organization with limited resources, if it is in fact MIRI's approach. But I could believe that value could be added by looking at lots of budding AGI architectures and trying to figure out how one might make them safer on the margin.

We could also stand to benefit from knowing more practical information (experimental data) about in what ways AI boxing works and in what ways it does not, and how much that is dependent on the structure of the AI itself.

Sure... but note that Eliezer Yudkowsky from MIRI was the one who invented the AI box experiment and ran the first few experiments, and FHI wrote this paper collecting ideas for how AI boxes might be constructed. (The other thing I didn't mention as a weakness of empiricism is that empiricism doesn't tell you which hypotheses might be useful to test. Knowing which hypotheses to test is especially valuable when testing them is expensive.)

I could believe that there are fruitful lines of experimental inquiry that are neglected in the AI safety space. Overall it looks kinda like crypto to me in the sense that theoretical investigation seems more likely to pan out. But I'm supportive of people thinking hard about specific useful experiments that someone could run. (You could survey all the claims in Bostrom's Superintelligence and try to estimate what fraction could be cheaply tested experimentally. Remember that just because a claim can't be tested experimentally doesn't mean it's not an important claim worth thinking about...)

Comment author: Danny_Hintze 28 January 2015 12:42:29AM 38 points [-]

Donated $200

Comment author: James_Miller 27 January 2015 04:50:03PM 38 points [-]

Donated $100.

Comment author: jessicat 11 December 2014 10:37:03PM *  38 points [-]

Transcript:

Question: Are you as afraid of artificial intelligence as your Paypal colleague Elon Musk?

Thiel: I'm super pro-technology in all its forms. I do think that if AI happened, it would be a very strange thing. Generalized artificial intelligence. People always frame it as an economic question, it'll take people's jobs, it'll replace people's jobs, but I think it's much more of a political question. It would be like aliens landing on this planet, and the first question we ask wouldn't be what does this mean for the economy, it would be are they friendly, are they unfriendly? And so I do think the development of AI would be very strange. For a whole set of reasons, I think it's unlikely to happen any time soon, so I don't worry about it as much, but it's one of these tail risk things, and it's probably the one area of technology that I think would be worrisome, because I don't think we have a clue as to how to make it friendly or not.

Comment author: thakil 21 November 2014 08:55:27AM 38 points [-]

So I'm going to say this here rather than anywhere else, but I think Eliezer's approach to this has been completely wrongheaded. His response has always come tinged with a hint of outrage and upset. He may even be right to be that upset and angry about the internet's reaction to this, but I don't think it looks good! From a PR perspective, I would personally stick with an amused tone. Something like:

"Hi, Eliezer here. Yeah, that whole thing was kind of a mess! I over-reacted, everyone else over-reacted to my over-reaction... just urgh. To clear things up, no, I didn't take the whole basilisk thing seriously, but some members did and got upset about it, I got upset, it all got a bit messy. It wasn't my or anyone else's best day, but we all have bad moments on the internet. Sadly the thing about being moderately internet famous is your silly over reactions get captured in carbonite forever! I have done/ written lots of more sensible things since then, which you can check out over at less wrong :)"

Obviously not exactly that, but I think that kind of tone would come across a lot more persuasively than the angry hectoring tone currently adopted whenever this subject comes up.

Comment author: MaximumLiberty 16 September 2014 02:38:58AM 37 points [-]

[Please read the OP before voting. Special voting rules apply.]

As a first approximation, people get what they deserve in life. Then add the random effects of luck.

Max L.

Comment author: orthonormal 05 September 2014 08:39:36PM 38 points [-]

Do not tempt Eliezer to make the last chapter of HPMOR available only in the event of a positive Singularity.

Comment author: Will_Newsome 08 July 2014 04:53:10AM *  35 points [-]

Sorry for being unclear. I meant that any subculture that is allergic to parody of itself is just inviting less fair and less jocular criticism. Eliezer has already greatly damaged LessWrong's reputation by making it seem cultish. Making comments about how people are sensitive to appearances of cultishness, and thus it's good for parody of that alleged cultishness to be banned, is just sowing the wind. I think that there are many interesting and independent intellectuals on LessWrong and I don't want them to be tarred as discreditable cultists. And that's why I would like it to be known that LessWrong is capable of self-parody and isn't going to pathetically grasp at credibility it never had in the first place.

In response to Against Open Threads
Comment author: blacktrance 30 May 2014 06:24:22PM 38 points [-]

I think LW's degradation is primarily in Main (interesting Main posts are rare these days), and has nothing to do with Open Threads. If anything, Open Threads help LW because they make community participation easier with a lower barrier to entry for posting.

Comment author: Brillyant 13 March 2014 02:39:21PM 37 points [-]

Irrationality Game: Less Wrong is simply my Tyler Durden—a dissociated digital personality concocted by my unconscious mind to be everything I need it to be to cope with Camusian absurdist reality. 95%.

Comment author: palladias 19 February 2014 03:48:49PM 37 points [-]

A simple reframe that helped jumpstart my creativity:

My cookie dough froze in the fridge, so I couldn't pry it out of the bowl to carry with me to bake at a party. I tried to get it out, but didn't succeed, and had basically resigned myself to schlepping the bowl on the metro.

But then I paused and posed the question to myself: "If something important depended on me getting this dough out, what would I try?"

I immediately covered the top of the bowl, ran the base under lukewarm to warm water, popped it out, wrapped it up, and went on my way.

Comment author: fubarobfusco 06 January 2014 03:49:13PM *  31 points [-]

People with strong political identities usually have their maps systematically distorted.

Oh, certainly. Feminism points out, though, that the social mainstream is also a strong political identity which systematically distorts people's maps. They use somewhat unfortunate historical words for this effect, like "patriarchy". That's just a label on their maps, though; calling a stream a creek doesn't change the water.

So combining this with your guideline, we should be careful not to invite anyone who has a strong political identity ... but we cannot do that, because "ordinary guy" (and "normal woman") is a strong political identity too. It's just a strong political identity one of whose tenets is that it is not a strong political identity.

We don't have the freedom to set out with an undistorted map, nor a perfect guide as to whose maps are more distorted. Being wrong doesn't feel like being wrong. A false belief doesn't feel like a false belief. If you start with ignorance priors and have a different life, you do not end up with the same posteriors. And as a consequence, meeting someone who has different data from you can feel like meeting someone who is just plain wrong about a lot of things!


Also ... I wonder what a person whose maps of the social world were really "no better than random" would look like. I think he or she would be vastly more unfortunate than a paranoid schizophrenic. He or she would certainly be grossly unable to function in society, lacking any ability to model or predict other people. As a result, he or she would probably have no friends, job, or political allies. Lacking the ability to work with other people at all, he or she would certainly not look like a member of any political movement.

As such, I have to consider that when applied to someone who clearly does not have these attributes, that expression is being used as merely a crude insult, akin to calling someone a "drooling moron" or "mental incompetent" because they disagree with you.

Comment author: Lumifer 13 August 2013 07:46:27PM *  37 points [-]

Hm. Let me throw out several points in random order:

-- I don't think LW is a "general-interest" forum. Not even "relatively". However that's fine -- there are really no such things as general-interest forums because their lack of focus kills them. What you have, actually, is online communities some of which spend their time chatting about whatever in the general section of their forums. But that general section is just for overflow, the community itself is formed and kept together by something that binds much tighter than general interest.

-- If I rephrase your post along the lines of "LW is a web-based club for smart people. How do we get more smart people to join our club?" -- would you object?

-- Size matters. In particular, online communities have certain optimal size for cohesiveness -- be too small and it's just a few old-timers making inside jokes; grow too big and you drown in a cacophony of noise. I've seen online communities mutate into something quite different from the original through massive growth. That may be fine in the grand scheme of things, but the original character is lost.

-- While attracting the "elite", how are you going to get rid of hoi polloi? If people arrive, set up camp in LW, and start discussing Jennifer Aniston's butt and what a horrible hangover they have today after being gloriously trashed yesterday, what are you going to do about it?

-- There is correlation between "being highly successful in real life" and "being able to avoid wasting time chattering away on the 'net".

-- I think I would support some additional granularity to this site (subreddit style), especially if we get some population growth. Nothing like Reddit itself, of course, but the existence of two parts and two parts only seems to be an artifact from the olden days (when you went to school up the hill both ways).

-- And finally, the important question: what do you want to achieve? Is it just having more smart people around to talk to, or there's more? In particular, with Pinky and the Brain flavour?

Comment author: mare-of-night 30 June 2013 03:36:17PM 38 points [-]

"One of my classmates gets bitten by a horrible monster, and as I scrabble frantically in my mokeskin pouch for something that could help her, she looks at me sadly and with her last breath says, 'Why weren't you prepared?' And then she dies, and I know as her eyes close that she won't ever forgive me -"

Comment author: gwern 21 May 2013 09:15:10PM *  37 points [-]

Factors why I have not and probably will not:

  1. Soylent costs more than my current diet, limiting gains
  2. it is a priori highly likely to fail since we know for a fact that severe nutrition deficiencies can be due to subtle & misunderstood factors (see: the forgetting of scurvy cures) and that nutrition is one of the least reliable scientific areas
  3. his work is even more likely than that to have problems because he hasn't consulted the existing work on food replacements (yes, it's a thing; how exactly do you think people in comas or with broken jaws get fed?)
  4. given #2, the fact that the negative effects are likely to be subtle and long-term means that on basic statistical power grounds, you'll want long and well-powered self-experiments to go from 'crappy self-experiment' to 'good self-experiment'*
  5. given the low odds of success (#2-3), the expensive powerful self-experiments necessary to shift our original expectations substantially due to long-term effects and subtlety (#4), and the small benefits (#1), the VoI is low here
  6. my other self-experiments, in progress and planned, suffer from many fewer of Soylent's defects, hence have reasonable VoIs (Specifically: I am or will be investigating Noopept, melatonin, magnesium l-threonate & citrate, coluracetam, meditation, Redshift, and lithium orotate.)
  7. VoI current/planned self-experiments (#6) > VoI Soylent cloning/tweaking (#5)
  8. hence, the opportunity cost of Soylent is higher than not, so I will continue my existing plans

* although see my reply to Qiaochu, at this point Rob isn't even at the 'crappy' level

EDIT: as of June 2015, I would amend my list of complaints to de-emphasize #3 as it seems that Soylent Inc has revised the formulation a number of times, run it by some experts, and has now been field-tested to some degree; most of my self-experiments in #6 have since finished (right now the only relevant ones are another magnesium self-experiment, trying to find the right dosage, and nonrandomized bacopa ABA quasiexperiment); and for point #1, between increasing the protein in my diet and official Soylent lowering prices, now Soylent is more like 2x my current food expenditures than 3x+.
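
To put rough numbers on the statistical-power point in #4, here is a minimal Python sketch, assuming a two-sample t-test framing and purely hypothetical effect sizes:

    # Sample sizes needed at alpha=0.05 and power=0.8 for a two-sample
    # t-test, across hypothetical effect sizes (Cohen's d). Subtle effects
    # (d around 0.2) are what point #4 worries about.
    from statsmodels.stats.power import TTestIndPower

    solver = TTestIndPower()
    for d in (0.8, 0.5, 0.2):
        n = solver.solve_power(effect_size=d, alpha=0.05, power=0.8)
        print(f"d={d}: ~{n:.0f} observations per condition")
    # d=0.8 -> ~26 per arm, but d=0.2 -> ~394 per arm: over two years of
    # daily measurements for a two-condition on/off design.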

Comment author: gwern 21 May 2013 08:07:35PM *  37 points [-]

To copy over my earlier G+ comment:

ಠ_ಠ I completely disapprove of this. Soylent is a fun idea, sure, but Rhinehart's asking for $100k to launch a Soylent manufacturing company?! He hasn't even done the minimal crappy self-experiments he could've run very easily, like randomizing weeks on and off Soylent! Nor, AFAIK, has he published any of the results from the early volunteers or anything, really. This is ridiculous.

See also Hacker News

Comment author: Qiaochu_Yuan 05 January 2013 11:49:01PM *  37 points [-]

My summary / take: believing arguments if you're below a certain level of rationality makes you susceptible to bad epistemic luck. Status quo bias inoculates you against this. This seems closely related to Reason as memetic immune disorder.

In response to Just One Sentence
Comment author: WrongBot 05 January 2013 02:11:56AM 36 points [-]

"If you perform experiments to determine the physical laws of our universe, you will learn how to make powerful weapons."

It's all about incentives.

Comment author: Tenoke 24 December 2012 01:19:44AM *  35 points [-]

That censorship motivated by what people think of LessWrong is ridiculous. That the negative effect on LW's reputation is probably significantly smaller than what is assumed. And that if EY thought censorship of content for the sake of LW's image was in order, he should logically have concluded that omitting fetishes from his public OKCupid profile (for the record, I've defended the view that this is his right), among other things, was also in order. And some other thoughts of this kind.

Comment author: Yvain 09 December 2012 07:03:45AM *  36 points [-]

Sex education (including non-abstinence) may not work at all, and if it does work it works only in a very weak and limited way.

Eating cholesterol doesn't cause high blood cholesterol. Eating saturated fat probably doesn't cause higher blood cholesterol. High blood cholesterol levels are protective against cancer and the mortality gain here probably outweighs any mortality loss from cardiovascular disease. The entire science of cholesterol is confused and terrible and practically every statement you have ever heard that includes the word "cholesterol" is very likely a lie. (link to a readable blog post with some of this, but you can also find it all in big-name medical journals)

The (good!) effect of drinking alcohol on life expectancy is super strong. Drinking wine a few times a week is correlated with up to four years' gain in lifespan (the effect is mostly found in the middle-aged, and might not be such a good idea in the young), and people who are smart and understand that correlation isn't always causation have amassed some decent evidence that at least some of this might be causal.

Labeling the amount of calories in food (for example on McDonald's restaurant menus) totally fails to change people's eating behaviors at all, no matter how hard people study it. Article here, credit to SarahC for pointing this out to me.

I hate all of these facts and wish they were not true, which makes me a credible source for them.

Comment author: Alicorn 09 December 2012 12:20:22AM 29 points [-]

So far, I've been more annoyed on LessWrong by people reacting to fear of "cultural erosion" than by any extant symptoms of same.

Comment author: Steve_Rayhawk 17 November 2012 04:27:37AM *  37 points [-]

The main way complexity of this sort would be addressable is if the intellectual artifact that you tried to prove things about were simpler than the process that you meant the artifact to unfold into. For example, the mathematical specification of AIXI is pretty simple, even though the hypotheses that AIXI would (in principle) invent upon exposure to any given environment would mostly be complex. Or for a more concrete example, the Gallina kernel of the Coq proof engine is small and was verified to be correct using other proof tools, while most of the complexity of Coq is in built-up layers of proof search strategies which don't themselves need to be verified, as the proofs they generate are checked by Gallina.

Isn't that as unbelievable as the idea that you can prove that a particular zygote will never grow up to be an evil dictator? Surely this violates some principles of complexity, chaos [...]

Yes, any physical system could be subverted with a sufficiently unfavorable environment. You wouldn't want to prove perfection. The thing you would want to prove would be more along the lines of, "will this system become at least roughly as capable of recovering from any disturbances, and of going on to achieve a good result, as it would be if its designers had thought specifically about what to do in case of each possible disturbance?". (Ideally, this category of "designers" would also sort of bleed over in a principled way into the category of "moral constituency", as in CEV.) Which, in turn, would require a proof of something along the lines of "the process is highly likely to make it to the point where it knows enough about its designers to be able to mostly duplicate their hypothetical reasoning about what it should do, without anything going terribly wrong".

We don't know what an appropriate formalization of something like that would look like. But there is reason for considerable hope that such a formalization could be found, and that this formalization would be sufficiently simple that an implementation of it could be checked. This is because a few other aspects of decision-making which were previously mysterious, and which could only be discussed qualitatively, have had powerful and simple core mathematical descriptions discovered for cases where simplifying modeling assumptions perfectly apply. Shannon information was discovered for the informal notion of surprise (with the assumption of independent identically distributed symbols from a known distribution). Bayesian decision theory was discovered for the informal notion of rationality (with assumptions like perfect deliberation and side-effect-free cognition). And Solomonoff induction was discovered for the informal notion of Occam's razor (with assumptions like a halting oracle and a taken-for-granted choice of universal machine). These simple conceptual cores can then be used to motivate and evaluate less-simple approximations for situations where the assumptions about the decision-maker don't perfectly apply. For the AI safety problem, the informal notions (for which the mathematical core descriptions would need to be discovered) would be a bit more complex -- like the "how to figure out what my designers would want to do in this case" idea above. Also, you'd have to formalize something like our informal notion of how to generate and evaluate approximations, because approximations are more complex than the ideals they approximate, and you wouldn't want to need to directly verify the safety of any more approximations than you had to. (But note that, for reasons related to Rice's theorem, you can't (and therefore shouldn't want to) lay down universally perfect rules for approximation in any finite system.)
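
As a toy instance of that "simple core behind an informal notion" pattern, here is Shannon's measure of expected surprise in a few lines of Python, with the i.i.d.-symbols modeling assumption made explicit:

    # Shannon entropy: the simple mathematical core discovered for the
    # informal notion of surprise (assuming i.i.d. symbols drawn from a
    # known distribution). H(p) = -sum_i p_i * log2(p_i), in bits.
    import math

    def entropy(dist):
        """Expected surprise, in bits, of a known discrete distribution."""
        return -sum(p * math.log2(p) for p in dist if p > 0)

    print(entropy([0.5, 0.5]))    # 1.0 bit: a fair coin flip
    print(entropy([0.99, 0.01]))  # ~0.08 bits: a nearly certain outcome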

Two other related points are discussed in this presentation: the idea that a digital computer is a nearly deterministic environment, which makes safety engineering easier for the stages before the AI is trying to influence the environment outside the computer, and the idea that you can design an AI in such a way that you can tell what goal it will at least try to achieve even if you don't know what it will do to achieve that goal. Presumably, the better your formal understanding of what it would mean to "at least try to achieve a goal", the better you would be at spotting and designing to handle situations that might make a given AI start trying to do something else.

(Also: can you offer some feedback as to what features of the site would have helped you realize sooner that there were arguments behind the positions you felt were being asserted blindly in a vacuum? The "things can be surprisingly formalizable, here are some examples" argument can be found in lukeprog's "Open Problems Related to the Singularity" draft and the later "So You Want to Save the World", though the argument is very short and its significance is hard to recognize if you don't already know most of the mathematical formalisms mentioned. A backup "you shouldn't just assume that there's no way to make this work" argument is in "Artificial Intelligence as a Positive and Negative Factor in Global Risk", pp. 12-13.)

what will prevent them from becoming "bad guys" when they wield this much power

That's a problem where successful/practically applicable formalizations are harder to hope for, so it's been harder for people to find things to say about it that pass the threshold of being plausible conceptual progress instead of being noisy verbal flailing. See the related "How can we ensure that a Friendly AI team will be sane enough?". But it's not like people aren't thinking about the problem.

Comment author: startling 10 November 2012 03:16:41AM *  17 points [-]

I've been reading the sequences and the front-page posts for about six months and participating in the irc channel for a little bit less time than that, but I haven't made an account until now. I offer my apologies in advance if this is in the wrong place. My intuition says this would do better as its own post, but alas, I do not have the necessary karma.

I should mention that there's some hateful (specifically transphobic) content later in this post. If you think you'll be upset by this, you might want to stop reading here.

So, #lesswrong is kind of an unfriendly place. I've been calling attention to racist and sexist remarks when I see them and can work up the nerve, but it could be a lot better. I'd paste some examples of these, but I don't save logs from all conversations. It's not uncommon to do so, so I'm sure someone has the ability to grep a few choice words and come up with some examples. I should also mention that I'm white and male, so I probably don't notice a lot of hate that I should.

I'm queer, though, and I identify tentatively as genderqueer, so I noticed this:

[Tue Nov 6 2012]
<Algo> ivan: Someone just told me... "well... having their food
labeled as GMO makes them uncomfortable like having sex with a
trans person"
<Algo> >.<
[18:10]
<startling> whaaat?
<Namegduf> That seems pretty plausible.
<Namegduf> Not particularly backed intuitive dislike.
<Namegduf> I mean, conditional on uncomfortability of both.
<gwern> Algo: makes sense. both are unnatural and deceptive
<Algo> gwern: Both are?
[18:13]
<gwern> Algo: yeah, one is a monstrous abortion pretending to be
its opposite and deluding the eye thanks to the latest scientific
techniques, and the other is a weird fruit
<startling> gwern, "deceptive" is a pretty terrible word to use
for trans people.
<startling> gwern, what a disgusting thing to say.
[18:14]
<gwern> startling: more or less disgusting than a GMO fruit rotting for a
week?
<gwern> inquiring minds need to know!

Go back and read the whole thing, if you haven't; specifically, I'm talking about gwern's messages, not Algo's.

And then, today, there was this:

<Grognor> also all of my anger toward drethelin is completely gone
[20:54]
<startling> gwern, so it is like do notation!
<Grognor> as well as toward everyone else
<gwern> Grognor: what, because you got a free book?
<Grognor> no.
<gwern> you had your 'nads surgically removed?
<Grognor> yes, that's exactly what happened.
[20:55]
<startling> electroshock therapy?
*** nshepperd (~asdfg@70.218.233.220.static.exetel.com.au) has quit: Ping
timeout: 276 seconds
[20:56]
<gwern> startling: maybe he started estrogen supplementation
<startling> gwern, okay?
<gwern> startling: we won't judge him for it. well, maybe you
won't, I find trannies really creepy

Note that we hadn't been talking about this since the previous post; gwern was going out of his way to provoke me.

I'm not sure what to do about this, especially since gwern is a well-respected member of LessWrong. I'm curious how the community feels about this. It obviously needs to be addressed, at the very least to stay within the bounds of freenode policy:

In accordance with UK law freenode and the PDPC have no tolerance for any activity which could be construed as:

  • incitement to racial hatred
  • incitement to religious hatred

or any other behaviour meant to deliberately bring upon a person harassment, alarm or distress. We do NOT tolerate discrimination on the grounds of race, religion, gender, sexual preference or other lifestyle choices and run with a zero-tolerance policy for libel and defamation.

While we believe in the concept of freedom of thought and freedom of expression, freenode does not operate on the basis of absolute freedom of speech and we impose limitations eg. on "hate speech".

N.B. I've edited this post to fix the links; markdown reference-style links like [link][] apparently do not work.

I've also gone back and edited out the unrelated statements of people who wanted me to; I may do that again on request.

Comment author: Yvain 18 September 2012 10:28:23AM *  36 points [-]

If you are at all interested in rationality it would be a huge shame for you to skip the Sequences.

Yes, a lot of the material in the Sequences could also be obtained by reading, very carefully, a few hundred impenetrable scholarly books that most people have never heard of in five or ten different disciplines, supplemented by a few journal articles, plus some additional insights by "reading between the lines", plus drawing all the necessary connections between them. But you will not do this.

The Sequences condense all that information, put it in a really fun, really fascinating format, and transfer all of it into the deepest levels of your brain in a way that those hundred books wouldn't. And then there's some really valuable new material. Luke and Eliezer can argue whether the new material is 30% of the Sequences or 60% of the Sequences, but either number is still way more output than most people will produce over their entire lives.

If your worry is that they will just be recapitulating things you already know, I am pretty doubtful; I don't know your exact knowledge level, but they were pretty exciting for me when I first read them and I had college degrees in philosophy and psychology which are pretty much the subjects covered. And if they are new to you, then from a "whether you should read them" point of view it doesn't matter if Eliezer copied them verbatim off Wikipedia.

Seriously. Read the Sequences. Luke, who is the one arguing against their originality above, says that they are the one book he would like to save if there was an apocalypse. I would have to think a long time before saying the same but they're certainly up there.

Also, as a fellow doctor interested in utilitarianism/efficient charity, I enjoyed your blog and associated links.

Comment author: Kevin 26 July 2012 01:56:13PM *  37 points [-]

This was a private party announced via a semi-public list. A reporter showed up and she talked to people without telling them she was a reporter. This is not a report, it is a tabloid piece. Intentional gossip.

Comment author: RobertLumley 20 July 2012 06:00:30PM *  31 points [-]

In other news, over 91,000 people have died since midnight EST.

Comment author: pragmatist 07 June 2012 11:27:40PM *  38 points [-]

Here's an attempted reconstruction of Mills' argument. I'm not endorsing this argument (although there are parts of it with which I sympathize), but I think it is a lot better than the case for Mills as you present it in your post:

If a friend asked me whether she should vote in the upcoming Presidential election, I would advise her not to. It would be an inconvenience, and the chance of her vote making a difference to the outcome in my state is minuscule. From a consequentialist point of view, there is a good argument that it would be (mildly) unethical for her to vote, given the non-negligible cost and the negligible benefit. So if I were her personal ethical adviser, I would advise her not to vote. This analysis applies not just to my friend, but to most people in my state. So I might conclude that I would produce significant good if I launched a large-scale state-wide media blitz discouraging voter turn-out. But this would be a bad idea! What is sound ethical advice directed at an individual is irresponsible when directed at the aggregate.

80k strongly encourages professional philanthropy over political activism, based on an individualist analysis. Any individual's chance of making a difference as an activist is small, much smaller than his chance of making a difference as a professional philanthropist. Directed at individuals, this might be sound ethical advice. But the message has pernicious consequences when directed at the aggregate, as 80k intends.

It is possible for political activism to move society towards a fundamental systemic change that would massively reduce global injustice and suffering. However, this requires a cadre of dedicated activists. Replaceability does not hold for political activism; if one morally serious and engaged activist is lured away from activism, it depletes the cadre. Now, any single activist leaving (or not joining) the cadre will not significantly affect the chances of revolution succeeding. But if there is a message in the zeitgeist that discourages political participation, instead encouraging potential revolutionaries to participate in the capitalist system, this can significantly impact the chance of revolutionary success. So 80k's message is dangerous if enough motivated and passionate young people are convinced by their argument.

It's sort of like an n-person prisoner's dilemma, where each individual's (ethically) dominant strategy is to defect (conform with the capitalist system and be a philanthropist), but the Nash equilibrium is not the Pareto optimum. This kind of analysis is not uncommon in the Marxist literature. Analytic Marxists (like Jon Elster) interpret class consciousness as a stage of development at which individuals regard their strategy in a game as representative of the strategy of everyone in their socio-economic class. This changes the game so that certain strategies which would otherwise be individually attractive but which lead to unfortunate consequences if adopted in the aggregate are rendered individually unattractive. [It's been a while since I've read this stuff, so I may be misremembering, but this is what I recall.]
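
For concreteness, here is a minimal Python sketch of that n-person structure, with purely hypothetical payoff numbers; it only aims to exhibit defection dominating while universal cooperation Pareto-dominates universal defection:

    # A symmetric n-player prisoner's dilemma: each cooperator adds to a
    # shared public good, while defecting grants a fixed private bonus.
    # Payoff numbers are hypothetical, chosen only to exhibit the structure.
    def payoff(my_move, others_cooperating, n_players=10):
        coop_count = others_cooperating + (1 if my_move == "cooperate" else 0)
        public_good = 2.0 * coop_count / n_players
        private_bonus = 1.0 if my_move == "defect" else 0.0
        return public_good + private_bonus

    # Defection strictly dominates: whatever the other nine players do,
    # defecting beats cooperating by 1.0 - 2.0/n_players = 0.8.
    assert all(payoff("defect", k) > payoff("cooperate", k) for k in range(10))
    # Yet all-cooperate (2.0 each) Pareto-dominates all-defect (1.0 each):
    print(payoff("cooperate", 9), ">", payoff("defect", 0))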

Comment author: ArisKatsaris 23 March 2012 02:48:41AM *  34 points [-]

I'm pretty sure the solution is as follows (I've already posted it in the TV Tropes forum). It's in ROT13, in case anyone still wants to figure it out: Yhpvhf Znysbl pynvzrq gb unir orra haqre Vzcrevhf ol Ibyqrzbeg. Ibyqrzbeg jnf qrsrngrq ol Uneel Cbggre. Sebz Serq & Trbetr'f cenax jr xabj gung xvyyvat gur jvmneq gung unf lbh haqre gur Vzcrevhf phefr perngrf n qrog. Erfhyg: Yhpvhf Znysbl naq rirel bgure Qrngu rngre pynvzvat gb unir orra vzcrevbfrq ner abj haqre yvsr qrog gb Uneel Cbggre. Ur pna fgneg erqrrzvat.

Comment author: Nick_Tarleton 25 January 2012 11:41:06PM *  38 points [-]

"Many (and probably most) animals also have gender in the sense that individuals with penises behave in certain ways, and individuals with ovaries behave in other ways, despite not having memes." It would be surprising if H. sapiens were very different.

(The obviousness-in-retrospect of this argument, stated so straightforwardly, combined with the fact that I almost never hear it stated so straightforwardly and never thought of it myself, makes me update towards culture being able to non-obviously derange debates like this to a really high degree. Far mode isn't naturally about truth.)

Comment author: Alicorn 25 January 2012 11:39:31PM 36 points [-]

I think I'm done. If I think of any more I'll add them to this comment instead of making a new one.

"How do you operationalize that?"

"'Snow is white' is true if and only if snow is white."

"If I may generalize from one example here..."

"I'm suffering from halo effect."

"Warning: Dark Arts."

"Okay, but in the Least Convenient Possible World..."

"We want to raise the sanity waterline."

"You've fallen prey to the illusion of transparency."

"Bought some warm fuzzies today."

"What does the outside view say?"

"So the idea is that we make all scientific knowledge a sacred and closely guarded secret, so it will be treated with the reverence it deserves!"

"How could you test that belief?"

Comment author: fubarobfusco 19 January 2012 02:43:20AM 37 points [-]

What evidence have you? Lots of New Age practitioners claim that New Age practices work for them. Scientology does not allow members to claim levels of advancement until they attest to "wins".

For my part, the single biggest influence that "their brand of rationality" (i.e. the Sequences) has had on me may very well be that I now know how to effectively disengage from dictionary arguments.

Comment author: turchin 11 December 2011 10:01:41PM *  38 points [-]

What are you doing about FAI?

Comment author: XiXiDu 07 November 2011 10:43:04AM 37 points [-]

What is each member of the SIAI currently doing and how is it related to friendly AI research?

Comment author: Vladimir_Nesov 29 September 2011 10:33:19AM *  38 points [-]

Let's not call shoes we like "rationalist shoes".

Edit: (Original title of the post was "Rationalist Video Game: Frozen Synapse".)

Comment author: [deleted] 18 May 2011 06:50:13AM 37 points [-]

This offers food for thought about various anti-aging strategies. For example, given the superexponential growth in mortality, if we had a magic medical treatment that could cut your mortality risk in half but didn't affect the growth of said risk, then that would buy you very little late in life, but might extend life by decades if administered at a very young age.
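
A quick numerical sketch of that point, as a toy Gompertz model in Python (the baseline hazard and eight-year doubling time are illustrative assumptions, not values fitted to real life tables; with these numbers the young-age gain is around one doubling time, so the qualitative asymmetry rather than the exact size is the point):

    # Toy Gompertz model: the annual death hazard doubles every 8 years.
    # Halving the hazard shifts the whole curve by one doubling time, so
    # the gain in expected remaining years is largest when the halving
    # starts young and shrinks late in life. Parameters are illustrative.
    def remaining_years(age, halved=False, h0=1e-4, doubling=8.0):
        """Curtate expected remaining years under a yearly Gompertz hazard."""
        scale = 0.5 if halved else 1.0
        alive, total = 1.0, 0.0
        for t in range(200):
            hazard = min(1.0, scale * h0 * 2 ** ((age + t) / doubling))
            alive *= 1.0 - hazard
            total += alive
        return total

    for age in (20, 80):
        gain = remaining_years(age, halved=True) - remaining_years(age)
        print(f"halving hazard from age {age}: ~{gain:.1f} extra expected years")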

This isn't an anti-aging strategy, but it is an anti-death strategy: low-dose aspirin. As explained in this New York Times article on December 6, 2010, "researchers examined the cancer death rates of 25,570 patients who had participated in eight different randomized controlled trials of aspirin that ended up to 20 years earlier".

Eight. Different. Randomized. Controlled. Trials. Twenty-five thousand people.

They found (read the article) that low-dose aspirin dramatically decreased the risk of death from solid tumor cancers. Again, this ("risk of death") is the gold standard - many studies measure outcomes indirectly (e.g. tumor size, cholesterol level, etc.) which leads to unpleasant surprises (X shrinks tumors but doesn't keep people alive, Y lowers cholesterol levels but doesn't keep people alive, etc.). Best of all is this behavior: "the participants in the longest lasting trials had the most drastic reductions in cancer death years later."

Not mentioned in the article is the fact that aspirin is an ancient drug, in use for over a century with side effects that, while they certainly exist, are very well understood. This isn't like the people taking "life-extension regimens" or "nootropic stacks", who are, as far as I'm concerned, finding innovative ways to poison themselves.

Yet the article went on to say this:

But even as some experts hailed the new study as a breakthrough, others urged caution, warning people not to start a regimen of aspirin without first consulting a doctor about the potential risks, including gastrointestinal bleeding and bleeding in the brain (hemorrhagic strokes).

“Many people may wonder if they should start taking daily aspirin, but it would be premature to recommend people starting taking aspirin specifically to prevent cancer,” said Eric J. Jacobs, an epidemiologist with the American Cancer Society.

I'm a programmer, not a doctor - but after looking around, I concluded that the risks of GI bleeding were not guaranteed fatal, and the risks of hemorrhagic strokes were low in absolute terms. Also, aspirin is famously effective against ischemic strokes. According to Wikipedia: "Although aspirin also raises the risk of hemorrhagic stroke and other major bleeds by about twofold, these events are rare, and the balance of aspirin's effects is positive. Thus, in secondary prevention trials, aspirin reduced the overall mortality by about a tenth."

So unless aspirin's risks are far more grave than I've currently been led to believe, as far as I'm concerned, people saying "hey, even if you're not subject to aspirin's well-known contraindications, you shouldn't start low-dose aspirin just yet" are literally, statistically killing people. Cancer is pretty lethal and we're not really good at fixing it yet, so when we find something that can really reduce the risk (and there aren't many - the only other ones I can think of are the magical substances known as not-smoking and avoiding-massive-doses-of-ionizing-radiation), we should be all over that like cats on yarn.

I make damn sure to take my low-dose aspirin every day. I started it before reading this article on the advice of my doctor who thought my cholesterol was a little high - I'm almost 28, so it'll have many years in which to work its currently poorly understood magic.

That said, this reduces the risk of one common cause of death (two or three if you throw in heart attacks and ischemic strokes). There are lots of others out there. Even if you could avoid all of them (including the scariest one, Alzheimer's - it's insanely common, we have no fucking clue what causes it or how to stop it, and it annihilates your very self - even if cryonics is ultimately successful, advanced Alzheimer's is probably the true death), humans pretty clearly wear out with an upper bound of 120 years. Maybe caloric restriction can adjust that somewhat. But I think I'll sign up for cryonics sooner rather than later - I'm in favor of upgrading probability from "definitely boned" to "probably boned but maybe not".

Comment author: WikiLisks 11 December 2010 11:44:39PM 37 points [-]

WikiLisks is a non-karma media organization dedicated to bringing important existential risks to the public. We provide an innovative, secure, and acausal way for independent AIs around the multiverse to provide information about their utility functions.

Comment author: Leonhart 10 December 2010 01:56:47PM *  34 points [-]

I'm curious.

I am in the following epistemic situation: a) I missed, and thus don't know, BANNED TOPIC; b) I do, however, understand enough of the context to grasp why it was banned (basing this confidence on the upvotes to my old comment here).

Out of the members here who share roughly this position, am I the only one who - having strong evidence that EY is a better decision theorist than me, and understanding enough of previous LW discussions to realise that yes, information can hurt you in certain circumstances - is PLEASED that the topic was censored?

I mean, seriously. I never want to know what it was and I significantly resent the OP for continuing to stir the shit and (no matter how marginally) increasing the likelihood of the information being reposted and me accidentally seeing it.

Of course, maybe I'm miscalibrated. It would be interesting to know how many people are playing along to keep the peace, while actually laughing at the whole thing because of course no mere argument could possibly hurt them in their invincible mind fortresses.

(David Gerard, I'd be grateful if you could let me know if the above trips any cultishness flags.)

Comment author: Alicorn 01 December 2010 07:09:11PM 35 points [-]

It's saddening to me that people who don't fit in their assigned gender have to defend their typicality relative to the other gender, and cisfolk basically don't. Someone who was just like me mentally, but was born with male genitals and brought up as a male, would probably report this kind of discomfort; there are plenty of ways in which I am non-stereotypical. And yet as a person physically and socially female from birth, I don't stick out like a sore thumb; people (least of all me) do not seem to wonder if maybe I'm really a guy on the inside; no one wonders if I'm overcompensating for something should I spin around in a twirly skirt. I'm within tolerances for my assigned gender, basically. It is an unfairly distributed cis privilege that I have, that this is all the analysis anyone requires of me.

Lucidfox, you sound like you are within tolerances for femininity. Be welcome. Help yourself to your name and your pronouns and whatever bodily interventions are medically available to you.

In response to Crazy Ideas Thread
Comment author: James_Miller 08 July 2015 06:10:34AM 36 points [-]

To any future super-intelligence reading this after my death: I very much want to be brought back to life.

Comment author: RichardKennaway 18 May 2015 02:46:54PM 36 points [-]

That they notice when they are arguing about definitions, and not do that.

Comment author: Daniel_Burfoot 27 January 2015 08:34:30PM 37 points [-]

Donated $100.

Comment author: James_Miller 18 December 2014 04:54:10PM 37 points [-]

True story:

My son resisted cleaning up his toys but loved beating me at games. Once, when he was three, I took advantage of his competitive spirit by dividing his blocks into two piles, assigning one pile to him and the other to myself, and then telling my son that we would race to see who put away all of his blocks first. My son smiled, indicating that he was going to play my game, making me proud of my parenting skills.

At the contest’s start my son grabbed a bunch of my blocks, ran outside of his room and threw the blocks down our stairs. When he returned I laughed so hard that the game ended.

My son recently joined LessWrong.

In response to Jokes Thread
Comment author: solipsist 25 July 2014 02:43:47AM *  36 points [-]

Three logicians walk into a bar. Bartender asks "Do you all want a drink?". The first says "I don't know", the second says "I don't know", and the third says "yes".

Comment author: Yvain 22 July 2014 05:00:45PM *  37 points [-]

LW is not at risk anytime soon of falling in love with politics, but it is at risk of appearing arrogant, dismissive, insulting, thoughtlessly-opposed-to-local-politics-and-groupcraft, etc.

This might be the crux of our disagreement.

I don't have statistics for Less Wrong, but here are some for SSC. The topic is "median number of page views for different types of post throughout 2014".

As you can see, interest in charity and statistics is the lowest, followed by interest in transhumanism and rationality. Politics is the highest of the group that clusters around the 3000s. Then comes "race and gender" at 8000, and "things i will regret writing" (my tag for very controversial political rants that will make a lot of people very angry) at 16000, ie about five times the level for rationality or transhumanism.

This seems to correspond to how things work on Less Wrong, where for example a basic introduction to misogyny and mansplaining got almost twice as many comments as Anna's massive and brilliant post resolving a bunch of philosophy of mind issues, and more than three times as many as Luke's heavily researched primer on fighting procrastination.

Not to mention that the disaster with Eugine was politically based. I'm pretty sure nobody mass-downvotes because someone else disagrees with them about GiveWell.

Less Wrong is massively at risk of falling in love with politics. Politics is much more interesting and attention-sucking than working on important foundational questions, and as soon as we relax the taboo on it we are doomed. On the other hand, most of the people who say we're "arrogant" will find a reason to think so no matter how we phrase things. I mean, what happens when they're okay with our pithy slogan on politics, look at the site, and figure out what we actually believe?

That having been said, if you've been doing a lot of public relations work and empirically find a lot of people are turned off by the way "politics is the mind-killer" is used in practice, I can't tell you you're wrong. I just hope that however you choose to push the same idea doesn't result in a sudden influx of people who think politics is great and are anxious to prove they're capable of "hard mode".

Comment author: pianoforte611 14 July 2014 12:45:44AM 34 points [-]

Convincing someone not to pursue a PhD is rather different than convincing someone to drop out of a top-10 PhD program to attend LW training camps. The latter does indeed merit the response WTF.

Also, there are lots of people, many of them graduate students and PhDs themselves, who will try to convince you not to do a PhD. It's not an unusual position.

Comment author: shminux 03 July 2014 06:00:39PM *  26 points [-]

I seem to be the lone dissenter here, but I am unhappy about the ban. Not that it is unjustified; it definitely is justified. However, it does not address the main issue (until jackk fiddles with karma): preventing Eugine from mass downvoting. So this is mainly retribution rather than remediation, which seems anti-rational to me, even if, as one of the victims, I find it emotionally satisfying.

Imagine for a moment that Eugine did not engage in mass downvoting. He would be a valuable regular on this site. I recall dozens of insightful comments he made (and dozens of poor ones, of course, but who am I to point fingers), and I only stopped engaging him in the comments after his mass-downvoting habits were brought to light for the first time. So, I would rather see him exposed and dekarmified, but allowed to participate.

TL;DR: banning is the wrong decision; he should have been exposed and stripped of the ability to downvote instead. Optionally, all his votes ever could have been reversed, unless that's hard.

EDIT: apparently not the lone dissenter, just the first to speak up.

Comment author: gwern 24 June 2014 01:49:09AM *  37 points [-]

EDIT: I've removed this draft & posted a longer version incorporating some of the feedback here at http://lesswrong.com/lw/khd/confound_it_correlation_is_usually_not_causation/

Comment author: NoSuchPlace 12 March 2014 11:27:04PM *  34 points [-]

Irrationality game: Everything that exists has subjective experience (80%). This includes things such as animals, plants, rocks, ideas, mathematics, the universe, and any subcomponent of an aforementioned system.

Comment author: Kawoomba 18 July 2013 02:06:44PM *  33 points [-]

What an ending that would be: Harry uses the Self-Indication Assumption to conclude that he is most probably a character in a Muggle story about magic, then manages to 'blackmail' the author into granting him godhood in order to stop Harry from committing suicide in a literarily unsatisfying fashion, since the author would prefer the former as an ending over the latter. Someone would object that Harry doesn't have agency? But he does, if the author takes the character seriously and continues with a high-fidelity in-character continuation. If Harry found out he's likely a character in a novel, he'd be right, and there's no reason he shouldn't use that to his advantage.

Talk about writing yourself into a corner! :)

EDIT:

"It's not going to work!" Harry was shouting at the top of his lungs, his wand pointed at his own temple. He gripped it so tightly his knuckles were white. "You wrote me this way, you know there's no going back from this point, you knew I'd find out eventually."

Ominous clouds were forming, a harsh wind picking up, making Harry shiver from where he stood in the Hogwarts central yard. Students stood aghast, staring at the screaming boy pointing the wand at himself.

"And you can stop with the weather charade, if I do this -- and I will, your book will be ruined, those little touches of drama aren't going to fool anyone!"

"Harry." The cold voice of the Defense Professor carried over the noise of the wind effortlessly, yet seemed spoken with little effort. "I have new ... " Quirrell pointedly glanced at the students present. "... information regarding your quest. I don't know what you are trying to achieve with this, but it may be best we go inside and ..."

Harry scoffed -- actually scoffed at his mysterious ancient wizard. "Silence, Mr. Intriguing-Plot-Point! I shouldn't even be talking to you, not any more! I won't be distracted, this ends now, one way or another. Which ending will you be writing, Mr. Not-Quite-Omniscient Author? None of your characters will make me back out of doing what's right! If you write this novel, I will make sure it's a utopia, or I'll ruin it!"

"Harry!" A wide-eyed Dumbledore appeared as if out of nowhere next to Quirrell. "Harry, my dear boy! Stop this madness! This is not what the hero is supposed to -" "You know, Headmaster, I don't blame you for your failures, for not seeing the obvious. That, too was part of the plot. You see," Harry's voice was dripping with condescension. "You weren't supposed to be endowed with enough agency, or to be taken seriously enough, to actually come to the right conclusion, and to cut your own strings. You remain the puppet ... with all due respect."

"Harry Potter. You and I have unfinished business!" Draco Malfoy, his father's spitting image, strut upon the courtyard. Harry didn't even look at him. Draco stared incredulously as Harry looked up at the sky, and continued to shout against the wind. "That must be some sort of movie allusion. Getting desperate, Mr. Author, aren't we? You won't salvage this ending, no matter how hard you contrive to. This show is over, no more diversions. It's time to end this!"

The wind stopped abruptly. Harry turned around to find the courtyard empty. Only a certain twitch of his eyes betrayed that he had come to a decision, that he was preparing to act. The wand pressed ever harder against his temple, and he opened his mouth to speak the final words. Had he overreached with his precommitment? I can only go forward; if I stop, I'm lost.

"Harry, oh Harry." A soft female voice said from behind him.

He knew that voice.

Comment author: Viliam_Bur 11 April 2013 02:17:29PM *  34 points [-]

I noticed that in many descriptions of violence against women, it is emphasised that the given person is a normal male. I feel this requires deeper analysis than just saying "I agree" or "I disagree and I feel offended". Different people may translate these words completely differently, so let's think about which translations are correct and which are not.

To make this discussion shorter, let's ignore the fact that women can also be violent, and focus only on what "normal male" means in this context. Here are a few possible translations. Actually, I'll just pick two extreme ones, and anyone is welcome to add other options (because I don't want to generate too many strawpersons).

  • A man can be abusive towards his wife/girlfriend/a random girl even if he is not a psychopath, even if he is very nice and polite towards all his friends and strangers, if he is a good student, productive in his job, or a Nobel Prize winner. Towards a specific person in a specific relationship, his behavior may be completely different.

  • Deep in their hearts, all men desire to torture women. Some of them are just too afraid of legal consequences.

Let's say that I agree with the first version, disagree with the second version... and I am never sure which version a person has in mind when she uses these words without further explanation. The principle of charity points towards the first explanation, but I know there are people who believe the second version too. So I would prefer if people communicated more clearly.

Comment author: ModusPonies 07 March 2013 03:21:23PM 37 points [-]

If a complete stranger or an acquaintance can do something useful for you, ask. (Politely. At a convenient time. With an appropriate amount of honest flattery.) If they say no, don't press them.

Failure case: make someone else feel important. Success case: get a favor, maybe make a connection.

Comment author: lukeprog 12 January 2013 07:33:49PM *  37 points [-]

One of my favorite quotes of his, from Fix the machine, not the person:

[When a] system isn’t working, it doesn’t make sense to just yell at the people in it — any more than you’d try to fix a machine by yelling at the gears. True, sometimes you have the wrong gears and need to replace them, but more often you’re just using them in the wrong way. When there’s a problem, you shouldn’t get angry with the gears — you should fix the machine.

...You can’t force other people to change. You can, however, change just about everything else. And usually, that’s enough.
