Comment author: Alicorn 31 August 2011 06:23:42AM *  42 points [-]

I have only skimmed your post, but now feel motivated to leave feedback as requested. It is possible that some of my objections are misplaced, addressed somewhere in the depths of this article that my eyes glazedly passed over. In fact, my first complaint is that:

  • it is too long. LW tolerates long articles under limited circumstances and this doesn't meet any of them (you're not an established poster, don't have fifty footnotes with sources, don't apologize off the bat for length, and have missed many obvious opportunities for compression/excision). You should have made it much shorter (500 words about what the hell Direct Instruction consists of) or much much shorter (a two-sentence blurb with a link to more information).

  • It's sales-y. Full of applause lights (counted five instances of the string "rational" in your text). You claim that your intent is to pique interest, but that is not done by saying "This thing is interesting! This thing is interesting!" repeatedly in the local idiom.

  • It is badly structured. Rambles all over the place. If you laid out the contents of your article in conceptspace and made me walk from point to point in the order you present them, my feet would get tired and I would become dizzy. You have definitely not convinced me that you have learned a secret of how to teach things, on a meta as well as object level.

  • It makes you look like a crank. If DI needs this much fluff and meandering and enthusiastic pitching, it's probably not interesting. Oops.

In fact, the only reason I am bothering to think about this article ever again, having successfully scrolled all the way down to the unnecessary signature, is that you do repeatedly ask for feedback. If you're sincere about that: I invite you to post, as a reply to this comment, a 1-3 sentence description of what DI is, plus one sentence about whatever evidence (beyond your enthusiasm about it) which exists for its splendidness. (Last sentence but not the first 1-3 can be/consist primarily of linkage.)

Comment author: [deleted] 19 May 2011 01:16:06PM 42 points [-]

I'd prefer more posts that aim to teach something the author knows a lot about, as opposed to an insight somebody just thought of. Even something less immediately related to rationality -- I'd love, say, posts on science, or how-to posts, at the epistemic standard of LessWrong. Also I'd prefer more "Show LessWrong" project-based posts.

In response to Consequentialism FAQ
Comment author: PlaidX 26 April 2011 02:40:58AM 39 points [-]

I like it, but stop using "ey". For god's sake, just use "they".

In response to Rationalist Hobbies
Comment author: Manfred 18 February 2011 12:35:31AM *  40 points [-]

Hitting yourself on the head with a rock - Works on important physical skills, similar benefits to martial arts. Helps you overcome limitations. Reinforces actual reality of macro-scale objects, even when quantum mechanics is true. Teaches you about the difference between good and bad ideas.

Comment author: David_Gerard 01 December 2010 09:49:58PM *  39 points [-]

I like RationalWiki and try to write reasonable stuff there. (Particularly pleased with this one, which I just got to a silver rating.) That said, it really rubs some people up the wrong way. It's a skeptics' wiki and snarky as hell. It's made of flaws and the flaws are people.

The important things to remember are:

  • RationalWiki is a silly wiki of no import in the wider world. It doesn't pretend otherwise. It's mostly read by its writers.
  • The only reason anyone on LessWrong has noticed its existence is that it's one of the extremely few places that anyone has bothered writing about LessWrong.
  • The LessWrong article on RW is doing the job EY pointed out it would, i.e. luring us over here.
  • Taking something seriously just because it pays you attention may not be a good idea.

Comment author: Coscott 22 September 2014 09:49:32PM 42 points [-]

After hearing the idea, I believe that it is not at all dangerous. However, I think the general strategy of being more cautious than you think you have to be whenever you think you have a dangerous idea is a good one. If shminux's comment made you feel any negative emotions associated with being too cautious, I would like to cancel those out by applauding your choice to err on the side of caution.

Comment author: bramflakes 15 September 2014 11:41:13AM *  39 points [-]

Open borders is a terrible idea and could possibly lead to the collapse of civilization as we know it.

EDIT: I should clarify:

Whether you want open borders and whether you want the immigration status quo are different questions. I happen to be against both, but it is perfectly consistent for somebody to be against open borders but be in favor of the current level of immigration. The claim is specifically about completely unrestricted migration as advocated by folks like Bryan Caplan. Please direct your upvotes/downvotes to the former claim, rather than the latter.

Comment author: wedrifid 19 July 2014 12:19:33PM 42 points [-]

Is there anything we should do?

  • Meet 10 new people (over a moderately challenging personal specific timeframe).
  • Express gratitude or appreciation.
  • Work close to where we live.
  • Have new experiences.
  • Get regular exercise.

ie. No. Nothing about this article comes remotely close to changing the highest expected value actions for the majority of the class 'we'. If it happens that there is a person in that class for whom this opens an opportunity to create (more expected) value then it is comparatively unlikely that that person is the kind who would benefit from "we shoulding" exhortations.

Comment author: ialdabaoth 03 July 2014 12:42:41PM 39 points [-]

Huh. So I WASN'T paranoid.

That's actually a good feeling.

Comment author: Kaj_Sotala 02 July 2014 04:31:38AM 40 points [-]

It looks like the person who has been downvoting you is the same person mentioned in this thread. Follow-up queries also indicated that the same person had been downvoting several others who had previously complained of downvote stalking.

The said person failed to respond to my first private message on the subject; because there's the chance that they might have just missed it, I finally got around to sending them another message yesterday, explicitly mentioning the possibility of a ban unless they provide a very good explanation within a reasonable time. I apologize for taking so long - I procrastinated on this for a while, as I find it quite uncomfortable to initiate conflict with people.

Comment author: Kaj_Sotala 06 January 2014 09:44:01AM *  39 points [-]

I'm not sure that anything should be done about it, at least if we look at it from whole society's perspective. (Or rather, we should try to avoid the echo chamber effect if possible, but not at the cost of reducing dispassionate discussion.) If some places discuss sensitive issues dispassionately, then those places risk becoming echo chambers; but if no place does so, then there won't be any place for dispassionate discussion of those issues. I have a hard time believing that a policy that led to some issue only being discussed in emotionally charged terms would be a net good for society.

Comment author: leplen 13 August 2013 11:21:23PM *  42 points [-]

At the risk of sounding ridiculous, I will self-identify as a member of the intellectual elite since no one else seems to want to.

I'm occasionally engaged in LW and I'm interested in rationality and applied psychology and the idea of FAI.

I don't think LW is necessarily the best venue for discussing big important ideas. Making a post on the internet is something I might spend 4-5 working hours on. It might even be something I'll spend a couple days on, but that's an inconsequential amount of my time. And the vast majority of the people who read whatever post I generate will spend generously 15-20 minutes thinking about it. I'm actively working on reading and checking the math in a 300 page textbook in order to make a post on LW six months from now that maybe 100 people will read and almost no one will take seriously. If my day job weren't writing academic papers with similarly dim readership prospects this would surely be overwhelmingly demoralizing. There's a commitment issue here where it doesn't make sense to invest a lot of time in impressing/convincing LW readers. I have no guarantee that anyone is seriously engaged with whatever idea I present here as opposed to just being entertained, and most of the people reading this forum are not looking for things to seriously engage with. There's a limit to how many, how big, and how strange the ideas you encounter once a week in a blog can be. They might be entertaining, they might be interesting, but they can't all change the way you see the world. It takes a lot of time (for my mind at least) to process new ideas and work through all the implications.

LW is set up in such a way that it's a constant stream of updates, and any given post can expect a week or two of attention, at which point it fades into the background with all the other detritus. But big ideas are hard to grapple with in a week, and so most LW responses are the sort of off the cuff suggestions that you get when you expose people to a new idea they don't fully understand. I've been reading LW for 9 months now and I'm still on the fence about FAI. The internet makes publishing much easier, but it doesn't make thinking any easier. This is I think one of the reasons that science hasn't abandoned publishing in journals and why there aren't many elites on the web. Accessing content is already much much easier than digesting that content. I have whole binders full of papers I need to read and digest that I don't have time for. And so does everyone else probably. LW posts are primarily entertainment and most of the people who post here are doing it for a brief applause or to float an idea they haven't seriously worked on yet.

I'm also less clear as to what sort of content you want that you don't have. What's your end goal?

If I had to make code suggestions, I would say that discussions on a single post get too long before anything is resolved. There seems to be no point in commenting once there's a certain number of comments, and so discussion tends to sort of stall out. I'd be interested to see what the distribution of # of comments on high karma posts looks like and whether there's a specific number of comments which seems to function as a sort of glass ceiling. I also think that as time goes on things get pushed down the queue and become invisible. The fact that no matter how brilliant your idea is it's basically got a week in the limelight and then will be forgotten forever isn't super conducive to using LW to seriously discuss difficult problems.

And this is all off the top of my head, because of course I haven't seriously thought about this.

Comment author: ikrase 30 June 2013 10:25:45AM 40 points [-]

Disliking Hanson is not about....

Comment author: Stabilizer 27 June 2013 03:31:57AM 40 points [-]

The concept of "deserve" can be harmful. We like to think about whether we "deserve" what we get, or whether someone else deserves what he/she has. But in reality there is no such mechanism. I prefer to invert "deserve" into the future: deserve your luck by exploiting it.

Of course, "deserve" can be a useful social mechanism to increase desired actions. But only within that context.

Comment author: latanius 08 March 2013 12:57:30AM 42 points [-]

If you are trying to do X, surround yourself with people who are also doing X. Takes much less willpower to keep doing it.

Comment author: [deleted] 24 December 2012 12:27:38AM *  40 points [-]

I'm starting to feel strongly uncomfortable about this, but I'm unsure if that's reasonable. Here are some arguments ITT that are concerning me:

Does advocating gun control, or increased taxes, count? They would count as violence if private actors did them, and talking about them makes them more likely (by states).

Violence is a very slippery concept. Perhaps it is not the best one to base mod rules on. (more at end)

We're losing Graham cred by being unwilling to discuss things that make us look bad.

This one is really disturbing to me. I don't like all the self-conscious talk about how we are perceived outside. Maybe we need to fork LW to accomplish it, but I want to be able to discuss what's true and good without worrying about getting moderated. My post-rationality opinions have already diverged so far from the mainstream that I feel I can't talk about my interests in polite society. I don't want this here too.

If I see any mod action that could be destroyed by the truth, I will have to conclude that LW management is borked and needs to be forked. Until then I will put my trust in the authorities here.

Would my pro-piracy arguments be covered by this? What about my pro-coup d'etat ones?

Would it censor a discussion of, say, compelling an AI researcher by all means necessary to withhold their research from, say, the military?

The whole purpose of discussing such plans is to reduce uncertainty over their utility; you haven't proven that the utility gain of a plan turning out to be good must be less than the cost of discussing it in public.

Yeah seriously. What if violence is the right thing to do? (EDIT: Derp. Don't discuss it in public, (except for stuff like Konkvistador's piracy and reaction advocacy, which are supposed to be public))

My post was indeed inappropriate. I have used the "Delete" function on it.

This is important. If the poster in question agrees when it is pointed out that their post is stupid, go ahead and delete it. But if they disagree in some way that isn't simple defiance, please take a long look at why.

In general, two conclusions:

I support censorship, but only if it is based on the unaccountable personal opinion of a human. Anything else is too prone to lost purposes. If a serious rationalist (e.g. EY) seriously thinks about it and decides that some post has negative utility, I support its deletion. If some unintelligent rule like "no hypothetical violence" decides that a post is no good, why should I agree? Simple rules do not capture all the subtlety of our values; they cannot be treated as Friendly.

And, as usual, that which can be destroyed by the truth should be. If moderator actions start serving some force other than truth and good, LW, or at least the subset dedicated to truth and rationality, should be forked.

Comment author: aelephant 06 July 2012 01:53:10AM 42 points [-]

Encourage curiosity

In the book Brain Rules John Medina writes:

from dinosaurs to atheism

I remember, when I was 3 years old, obtaining a sudden interest in dinosaurs. I had no idea that my mother had been waiting for it. That very day, the house began its transformation into all things Jurassic. And Triassic. And Cretaceous. Pictures of dinosaurs would go up on the wall. I would begin to find books about dinosaurs strewn on the floor and sofas. Mom would even couch dinner as "dinosaur food," and we would spend hours laughing our heads off trying to make dinosaur sounds. And then, suddenly, I would lose interest in dinosaurs, because some friend at school acquired an interest in spaceships and rockets and galaxies. Extraordinarily, my mother was waiting. Just as quickly as my whim changed, the house would begin its transformation from big dinosaurs to Big Bang. The reptilian posters came down, and in their places, planets would begin to hang from the walls. I would find little pictures of satellites in the bathroom. Mom even got "space coins" from bags of potato chips, and I eventually gathered all of them into a collector's book.

This happened over and over again in my childhood. I got an interest in Greek mythology, and she transformed the house into Mount Olympus. My interests careened into geometry, and the house became Euclidean, then cubist. Rocks, airplanes. By the time I was 8 or 9, I was creating my own house transformations.

One day, around age 14, I declared to my mother that I was an atheist. She was a devoutly religious person, and I thought this announcement would crush her. Instead, she said something like, "That's nice, dear," as if I had just declared I no longer liked nachos. The next day, she sat me down by the kitchen table, a wrapped package in her lap. She said calmly, "So, I hear you are now an atheist. Is that true?" I nodded yes, and she smiled. She placed the package in my hands. "The man's name is Friedrich Nietzsche, and the book is called Twilight of the Idols," she said. "If you are going to be an atheist, be the best one out there! Bon appetit!"

I was stunned. But I understood a powerful message: Curiosity itself was the most important thing. And what I was interested in mattered. I have never been able to turn off this fire hose of curiosity.

Comment author: Alicorn 14 June 2012 08:28:38PM *  37 points [-]

If I were to consider the arguments of 20 other groups similar to Christian theologians, I would probably misunderstand them at least 1 time in 20.

Not all arguments which you misunderstand-and-disbelieve are actually sound.

Comment author: lessdazed 26 January 2012 01:12:15AM *  42 points [-]

"Did you just generalize from fictional evidence?"

"You're a one-boxer, right?" (Said with no context.)

"You'd choose specks, right?" (Said with no context.)

"Mysteriousness is not a property of a thing."

"You're running on corrupted hardware."

"Replace the symbol with the substance."

"Could you regenerate that knowledge?"

"Consider a group you feel prejudiced against, frequentists for example."

"But what's the best textbook on that subject?"

"Is that a compartmentalized belief?"

"I notice I am confused."

"Of course I have super-powers. Everyone does."

"Beliefs are properly probabilistic."

"Is that your confidence level inside or outside the argument?"

"Did you credibly pre-commit to that rule?"

"That's just what it feels like from the inside."

"Conceptspace is bigger than you imagine."

"No you don't believe you believe that."

"No, money is the unit of caring."

"If that doesn't work out for you, you can still make six figures as a programmer."

"Purpose is not an inherent property."

"You think introspection is reliable?"

"Why didn't you use log-odds?"

Bullshit Rationalists Say:

"My priors are different than yours, and under them my posterior belief is justified. There is no belief that can be said to be irrational regardless of priors, and my belief is rational under mine."

"I pattern matched what you said rather than either apply the principle of charity or estimate the chances of your not having an opinion marking you as ignorant, unreasoning, and/or innately evil."

"Rational..." (used in the title of a post on any topic.)

Shit and Bullshit Rationalists Don't Say:

"You're entitled to your opinion."

"You can't be too skeptical"

"Absence of evidence is not evidence of absence."

"Did you read what Kurzweil wrote about the Singularity?"

"100%."

"But was it statistically significant at the p<.05 level?"

"Yeah, I read all the papers cited in lukeprog's latest article."

Comment author: Viliam_Bur 19 January 2012 03:28:22PM 40 points [-]

SI is arrogant because it pretends to be even better than science, while failing to publish in significant scientific journals. If this does not seem like pseudoscience or a cult, I don't know what does.

So please either stop pretending to be so great or prove it! For starters, it is not necessary to publish a paper about AI; you can choose any other topic.

No offense; I honestly think you are all awesome. But there are some traditional ways to prove one's skills, and if you don't accept the challenge, you look like wimps. Even if the ritual is largely a waste of time (all signals are costly), there are thousands of people who have passed it, so a group of x-rational gurus should be able to use their magical powers and do it in five minutes, right?

Comment author: [deleted] 19 January 2012 02:47:19AM 39 points [-]

I don't think you're taking enough of an outside view. Here's how these accomplishments look to "regular" people:

CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team.

You wrote something 11 years ago which you now consider defunct, and which still is not a mainstream view in any field.

The Sequences are simply awesome.

You wrote series of esoteric blog posts that some people like.

And he did manage to write the most popular Harry Potter fanfic of all time.

You re-wrote the story of Harry Potter. How is this relevant to saving the world, again?

Finally, I suspect many people's doubts about SIAI's horsepower could be best addressed by arranging a single 2-hour conversation between them and Carl Shulman. But you'd have to visit the Bay Area, and we can't afford to have him do nothing but conversations, anyway. If you want a taste, you can read his comment history, which consists of him writing the exactly correct thing to say in almost every comment he's made for the past several years.

You have a guy who is pretty smart. Ok...

The point I'm trying to make is, muflax's diagnosis of "lame" isn't far off the mark. There's nothing here with the ability to wow someone who hasn't heard of SIAI before, or to encourage people to not be put off by arguments like the one Eliezer makes in the Q&A.

Comment author: James_Miller 07 November 2011 03:21:35PM *  40 points [-]

There are essentially no academics who believe that high-quality research is happening at the Singularity Institute.

I believe that high-quality research is happening at the Singularity Institute.

James Miller, Associate Professor of Economics, Smith College.

PhD, University of Chicago.

In response to What we're losing
Comment author: Yvain 15 May 2011 04:35:13PM *  42 points [-]

Agreed.

One person at the Paris meetup made the really interesting and AFAICT accurate observation that the more prominent a Less Wrong post was, the less likely it was to be high quality - ie comments are better than Discussion posts are better than Main (with several obvious and honorable exceptions).

I think maybe it has to do with the knowledge that anything displayed prominently is going to have a bunch of really really smart people swarming all over it and critiquing it and making sure you get very embarrassed if any of it is wrong. People avoid posting things they're not sure about, and so the things that get main-ed tend to be restatements of things that create pleasant feelings in everyone reading them without rocking any conceivable boat, and the sort of overly meta- topics you're talking about lend themselves to those restatements - for example "We should all be more willing to try new things!" or "Let's try to be more alert for biases in our everyday life!"

Potential cures include greater willingness to upvote posts that are interesting but non-perfect, greater willingness to express small disagreements in "IAWYC but" form, and greater willingness to downvote posts that are applause lights or don't present non-obvious new material. I'm starting to do this, but hitting that downvote button when there's nothing objectively false or stupid about a post is hard.

Comment author: MathMage 09 March 2015 07:11:41PM 41 points [-]

Sad ending:

All the students agree to take on the position of Defense Professor de facto.

Next year, all the students die.

Comment author: Jonathan_Lee 16 February 2015 12:52:39PM 41 points [-]

tl;dr: The side of rationality during Galileo's time would be to recognise one's confusion and recognise that the models did not yet cash out in terms of a difference in expected experiences. That situation arguably holds until Newton's Principia; prior to that no one has a working physics for the heavens.

The initial heliocentric models weren't more accurate by virtue of being heliocentric; they were better by virtue of having had their parameters updated with an additional 400 years of observational data over the previous best-fit model (the Alfonsine tables from the 1250s). The geometry was similarly complicated; there was still a strong claim that only circular motions could be maintained indefinitely, and so you have to toss 60 or so circular motions in to get the full solar system on either model.

Basically everyone was already using the newer tables as calculational tools, and it had been known from ancient times that you could fix any point you wanted in an epicyclic model and get the same observational results. The dispute was about which object was in fact fixed. Kepler dates to the same time, and will talk about ellipses (and dozens of other potential curves) in place of circular motion from 1610, but he cannot predict where a planet will be efficiently. He's also not exactly a paragon of rationality; astrology and numerology drive most of his system, and he quite literally ascribes his algebraic slips to god.

A brief but important digression into Aristotle is needed; what he saw as key was that the motion of the planets is unceasing but changing, whereas all terrestrial motions eventually cease. He held that circular motions were the only kind of motion that could be sustained indefinitely, and even then, only by a certain special kind of perfect matter. The physics of this matter fundamentally differed from the physics of normal stuff in Aristotle. Roughly and crudely, if it can change then it has to have some kind of dissipative / frictional physics and so will run down.

Against that backdrop, Galileo's key work wasn't the Dialogue, but the Sidereus Nuncius. There had been two novae observed in the 40 years prior, and this had been awkward because a whole bunch of (mostly neo-Platonists) were arguing that this showed the heavens changed, which is a problem for Aristotle. Now Galileo shows up and, using a device which distorts his vision, he claims to be able to deduce:

  • There are mountains on the moon (so that it is not a sphere, contra Aristotle)
  • There are invisible objects orbiting Jupiter
  • The planets show disks
  • The Sun has spots, which move across the face and separately change with time
  • Venus has phases (which essentially require that it orbit the Sun)
  • Saturn has lumps on it (and thus is not a sphere -- he's seeing the rings)

As an observational program, this is picked up and deeply explored by loads of people (inc. Jesuits like Riccioli). But to emphasise: Galileo is using a device which distorts his vision and which can only be tested on terrestrial objects, and claiming to use it to find out stuff about the heavens, which contemporary physics says are grossly different. Every natural philosopher who's read Aristotle recognises that this kind of procedure hasn't historically been useful.

From a viewpoint which sees a single unified material physics, these observations kill Aristotelian cosmology. You've got at least three centers of circular-ish motion, which means you can't mount the planets on transparent spheres to actually move them around. You have an indication that the Sun might be rotating, and is certainly dynamic. If you kill Aristotle's cosmology, you have to kill most of his physics, and thus a good chunk of his philosophy. That's a problem, because since Aquinas the Catholic church had been deriving theology as a natural consequence of Aristotle in order to secure themselves against various heresies. And now some engineer with pretensions is turning up, distorting his vision and claiming to upend the cart.

What Galileo does not have is a coherent alternative package of physics and cosmology. He claims to be able to show a form of circular inertia from first principles. He claims that this yields a form of relativity in motion which makes it difficult to discern your true motion without reference to the fixed stars. He claims that physics is kinda-sorta universal, based on his experience with cannon (which Aristotelian physics would dismiss because [using modern terminology] experiments where you apply forces yourself are not reproducible and so cannot yield knowledge). This means his physics has real issues explaining dissipative effects. He doesn't have action at a distance, so he can't explain why the planets do their thing (whereas there are physical models of Aristotelian / Ptolemaic models).

He gets into some pro forma trouble over the book, because he doesn't put a disclaimer on it saying that he'll retract it if it's found to be heretical. Which is silly, and he gets his knuckles rapped for it. The book is "banned", which means two things, for there are two lists of banned books. One is "burn before reading" and the other is more akin to being in the Restricted Section; Galileo's work is the latter.

Then he's an ass in the Dialogue. Even that would not have been an issue, but at the time he's the court philosopher of the Grand Duke of Tuscany, Ferdinando II de' Medici. This guy is a secular problem for the Pope; he has an army, he's not toeing the line, there's a worry that he'll annex the Papal states. So there's a need to pin his ears back, and Galileo is a sufficiently senior member of the court that Ferdinando won't ignore his arrest nor will he go to war over it.

So the Inquisition cooks up a charge for political purposes, has him "tortured" (which is supposed to mean they /show/ him the instruments of torture, but they actually forget to), gets him to recant (in particular gets Ferdinando to come beg for his release), and releases him to "house arrest" (where he is free to come, go, see whoever, write, etc.). The drama is politics, rather than anything epistemological.

As to the disputes you mention, some had been argued through by the ancient Greeks. For example, everyone knew that measurements were imprecise, and so moving the earth merely required that the stars were distant. It was also plain that if you accepted Galileo's observations as being indicative of truth, then Aristotelian gravity was totally dead, because some stuff did not strive to fall (cometary tails were also known to be... problematic).

Now, Riccioli is writing 20 years later, in an environment where heliocentrism has become a definite thing with political and religious connotations, associated to neo-Platonist, anti-Aristotelian, anti-Papal thinking. This is troublesome because it strikes at the foundational philosophy underpinning the Church, and secular rulers in Europe are trying to strategically leverage this. Much like Aquinas, Riccioli's bottom line is /written/ already. He has to mesh this new stack of observational data with something which looks at least somewhat like Aristotle. Descartes is contracted at about the same time to attempt to rederive Catholicism from a new mixed Aristotelian / Platonist basis.

As a corollary, he's being quite careful to list every argument which anyone has made, and every refutation (there's a comparatively short summary here). Most of the arguments presented have counterpoints from the other side, however strained they might seem from a modern view. It's more akin to having 126 phenomena which need to be explained than anything else. They don't touch on the apparently changing nature of the planets (by this point cloud bands on Jupiter could be seen) and restrict themselves mostly to the physics of motion. There's a lot of duplication of the same fundamental point, and it's not a quantitative discussion. There are some "in principle" experiments discussed, but a fair few had been considered by Galileo and calculated to be infeasible (e.g. observing 1 inch deflections in cannon shot at 500 yards, when the accuracy is more like a yard).

Obviously Newton basically puts a stop to the whole thing, because (modulo a lack of mechanism) he can give you a calculational tool which spits out Kepler and naturally fixes the center of mass. There are still huge problems; the largest is that even point-like stars appear to have small disks from diffraction, and until you know this you end up thinking every other star has to be larger than the entire solar system. And the apparent madness of a universal law is almost impossible to overstate. It's really ahistorical to think that a very modern notion of parsimony in physics could have been applied to Galileo and his contemporaries.

Comment author: Arran_Stirton 28 January 2015 01:56:23PM *  41 points [-]

Donated $180.

I was planning on donating this money, my yearly 'charity donation' budget (it's meager - I'm an undergraduate), to a typical EA charity such as the Against Malaria Foundation; a cash transaction for the utilons, warm fuzzies and general EA cred. However, the above has forced me to reconsider this course of action in light of the following:

  • The possibility CFAR may not receive sufficient future funding. CFAR expenditure last year was $510k (ignoring non-staff workshop costs that are offset by workshop revenue) and their current balance is something around $130k. Without knowing the details, a similarly sized operation this year might therefore require something like $380k in donations (a ballpark guesstimate, don't quote me on that). The winter matching fundraiser has the potential to fund $240k of that, so a significant undershoot would put the organization in a precarious position.

  • A world that has access to a well written rationality curriculum over the next decade has a significant advantage over one that doesn't. I already accept that 80,000 Hours is a high-impact organization, and they also work by acting as an impact multiplier for individuals. Given that rationality is an exceptionally good impact multiplier I must accept that CFAR existing is much better than it not existing.

  • While donations to a sufficiently-funded CFAR are most likely much lower utility than donations to AMF, donations to ensure CFAR's continued existence are exceptionally high utility. For comparison (as great as AMF is) diverting all donations from Wikipedia to AMF would be a terrible idea, as would overfunding Wikipedia itself. The world gets a large amount of utility out of the existence of at least one Wikipedia, but not a great deal of marginal utility from an overfunded Wikipedia. By my judgement the same applies to CFAR.

  • CFAR isn't a typical EA cause. This means that if I don't donate to keep AMF going, another EA will; however, if I don't donate to keep CFAR going there's a reasonable chance that someone else won't. In other words my donations to CFAR aren't replaceable.

  • To put my utilons where my mouth is, it looks like the funding gap for CFAR is something like ~$400k a year. GiveWell reckons that you can save a life for $5k by donating to the right charity. So CFAR costs 80 dead people a year to run, and there's the question: do I think CFAR will save more than 80 lives in the next year? The answer to that might be no, even though CFAR seems to be instigating high-impact good. But if I ask myself whether CFAR's work over the next decade will save more than 800 lives, the answer becomes a definite yes.
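
For what it's worth, the arithmetic in these bullets can be laid out as a quick sanity check. Every figure below is one of the ballpark guesses from above, not an audited number:

```python
# Back-of-envelope version of the guesstimates in the bullets above;
# every input is a rough figure from the comment, not an audited number.
expenditure = 510_000    # CFAR spending last year (net of offset workshop costs)
balance = 130_000        # approximate current balance
donations_needed = expenditure - balance        # ~$380k for a similar year
matching_potential = 240_000                    # winter fundraiser ceiling
unmatched_gap = donations_needed - matching_potential  # ~$140k beyond matching

cost_per_life = 5_000    # GiveWell-style cost to save a life at a top charity
funding_gap = 400_000    # guessed yearly funding gap
lives_per_year = funding_gap / cost_per_life    # 80
lives_per_decade = 10 * lives_per_year          # 800
```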

Comment author: Nate_Gabriel 16 October 2014 09:50:41PM 40 points [-]

I once believed that six times one is one.

I don't remember how it came up in conversation, but for whatever reason numbers became relevant and I clearly and directly stated my false belief. It was late, we were driving back from a long hard chess tournament, and I evidently wasn't thinking clearly. I said the words "because of course six times one is one." Everyone thought for a second and someone said "no it's not." Predictable reactions occurred from there.

The reason I like the anecdote is because I reacted exactly the same way I would today if someone corrected me when I said that six times one is six. I thought the person who corrected me must be joking; he knows math and couldn't possibly be wrong about something that obvious. A second person said that he's definitely not joking. I thought back to the sequences, specifically the thing about evidence to convince me I'm wrong about basic arithmetic. I ran through some math terminology in my head: of course six times one is one; any number times one is one. That's what a multiplicative identity means. In my head, it was absolutely clear that 6x1=1, this is required for what I know of math to fit together, and anything else is completely logically impossible.

It probably took a good fifteen seconds from me being called out on it before I got appropriately embarrassed.

This anecdote is now my favorite example of the important lesson that from the inside, being wrong feels exactly like being right.

In response to comment by gjm on Questions on Theism
Comment author: Yvain 09 October 2014 04:25:06AM *  41 points [-]

The Roman Catholic Church has a process -- to its credit, not a completely ridiculous one -- by which it certifies some healings there as miraculous. Although the process isn't completely ridiculous, it's far from obviously bulletproof; the main requirement is that a bunch of Roman Catholic doctors declare that the alleged cure is inexplicable according to current medical knowledge.

I went to medical school in Ireland and briefly rotated under a neurologist there. One time he received a very nice letter from the Catholic Church, saying that one of his patients had gotten much better after praying to a certain holy figure, and the Church was trying to canonize (or beatify, or whatever) the figure, so if the doctor could just certify that the patient's recovery was medically impossible, that would be really helpful and make everyone very happy.

The neurologist wrote back that the patient had multiple sclerosis, a disease which remits for long periods on its own all the time and so there was nothing medically impossible about the incident at all.

I have only vague memories of this, but I think the Church kept pushing it, asking whether maybe it was at least a little medically impossible, because they really wanted to saint this guy.

(the neurologist was an atheist and gleefully refused as colorfully as he could)

This left me less confident in accounts of medical miracles.

Comment author: Viliam_Bur 02 July 2014 04:15:29PM *  38 points [-]

Just to encourage you, I want to put things in context:

  • This is one person that significantly destroys the social capital of the LW community. And in our community, social capital is scarce.

  • They probably do this to promote their political views; to silence perceived political opponents. (Including new users.) This is completely against LW values.

If you'd just block their account without further notice right now, I would say: "Well done!". It is extremely generous to give them a chance to explain themselves; and there probably is no good explanation anyway, so it's just playing for time.

I mean, really, if one person keeps terrorizing the community, and the community is unwilling to defend themselves, then all the lessons about how rationalists are supposed to win have failed.

A person who did so much damage does not deserve a second chance. If you decide to give them a second chance, I won't complain. But I would complain about inaction while they continue to do more damage. If you are the only person who has access to the "Ban User" button, just press it already, before everyone leaves.

EDIT: This whole thread (and it is far from being the first one) is additional damage caused by a single person. People keep proposing solutions without evidence, then they argue with each other. There is a growing frustration when they realize that most of the proposed changes won't get implemented anyway (either because other people oppose it, or because making changes to LW codebase always takes a lot of time). We keep generating negative emotions, because... why exactly?

Comment author: Metus 10 March 2014 07:44:28PM 41 points [-]

If there's enough demand on LW I can write up a summary.

Please do.

Comment author: Eugine_Nier 25 February 2014 04:50:21AM *  39 points [-]

A pithy way of summarizing the above comment:

If someone tells you his cause is so important that lying for it is justified, assume he's lying.

Comment author: AlanCrowe 06 February 2014 08:47:28PM 41 points [-]

I think there is a tale to tell about the consumer surplus and it goes like this.

Alice loves widgets. She would pay $100 for a widget. She goes on line and finds Bob offering widgets for sale for $100. Err, that is not really what she had in mind. She imagined paying $30 for a widget, and feeling $70 better off as a consequence. She emails Bob: How about $90?

Bob feels like giving up altogether. It takes him ten hours to hand craft a widget and the minimum wage where he lives is $10 an hour. He was offering widgets for $150. $100 is the absolute minimum. Bob replies: No.

While Alice is deciding whether to pay $100 for a widget that is only worth $100 to her, Carol puts the finishing touches to her widget making machine. At the press of a button Carol can produce a widget for only $10. She activates her website, offering widgets for $40. Alice orders one at once.

How would Eve the economist like to analyse this? She would like to identify a consumer surplus of 100 - 40 = 60 dollars, and a producer surplus of 40 - 10 = 30 dollars, for a total gain from trade of 60 + 30 = 90 dollars. But before she can do this she has to telephone Alice and Carol and find out the secret numbers, $100 and $10. Only the market price of $40 is overt.

Alice thinks Eve is spying for Carol. If Carol learns that Alice is willing to pay $100, she will up the price to $80. So Alice bullshits Eve: Yeh, I'm regretting my purchase, I've rushed to buy a widget, but what's it worth really? $35. I've over paid.

Carol thinks Eve is spying for Alice. If Alice learns that they only cost $10 to make, then she will bargain Carol down to $20. Carol bullshits Eve: Currently they cost me $45 to make, but if I can grow volumes I'll get a bulk discount on raw materials and I hope to be making them for $35 and be in profit by 2016.

Eve realises that she isn't going to be able to get the numbers she needs, so she values the trade at its market price and declares GDP to be $40. It is what economists do. It is the desperate expedient to which the opacity of business has reduced them.

Now for the twist in the tale. Carol presses the button on her widget making machine, which catches fire and is destroyed. Carol gives up widget making. Alice buys from Bob for $100. Neither is happy with the deal; the total of consumer surplus and producer surplus is zero. Alice is thinking that she would have been happier spending her $100 eating out. Bob is thinking that he would have had a nicer time earning his $100 waiting tables for 10 hours.

Eve revises her GDP estimate. She has committed herself to market prices, so it is up 150% at $100. Err, that is not what is supposed to happen. Vital machinery is lost in a fire, prices soar and goods are produced by tedious manual labour, the economy has gone to shit, producing no surplus instead of producing a $90 surplus. But Eve's figures make this look good.

I agree that there is a problem with the consumer surplus. It is too hard to discover. But the market price is actually irrelevant. Going with the number you can get, even though it doesn't relate to what you want to know is another kind of fake, in some ways worse.
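
The two states of Eve's books in the story work out as follows; the only numbers used are the dollar figures from the story itself:

```python
# Surplus vs. market-price accounting for the trades in the story above.
valuation = 100   # Alice's private willingness to pay
price = 40        # Carol's price -- the only number Eve can observe
cost = 10         # Carol's private cost per widget

consumer_surplus = valuation - price                  # 60
producer_surplus = price - cost                       # 30
total_surplus = consumer_surplus + producer_surplus   # 90
gdp = price       # 40: Eve can only record the market price

# After the fire, Alice buys from Bob at exactly her valuation,
# and Bob's cost is 10 hours at $10/hour:
price_after, cost_after = 100, 100
surplus_after = (valuation - price_after) + (price_after - cost_after)  # 0
gdp_after = price_after   # 100: measured GDP rises 150% as real surplus vanishes
```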

Disclaimer: I'm not an economist. Corrections welcomed.

Comment author: buybuydandavis 07 January 2014 07:41:42AM 32 points [-]

The author apparently has the privilege of living in a bubble where everyone she knows fundamentally approves of all her opinions, but occasionally has one person out of 20 show up at a gathering who disagrees, and just may throw a fit if that person dares to voice their opinions.

Me - atheist, egoist, libertarian - I'm lucky if one person out of 20 won't think I'm the devil if I'm open about my opinions. I weep for the discomfort she feels when my existence impinges on her awareness.

I note that a Christian or Muslim describing how they are hurt by those who dare openly(!) question their sacred values wouldn't receive such polite consideration, and certainly not by this blogger.

Comment author: NancyLebovitz 06 January 2014 08:00:47AM *  34 points [-]

I like Less Wrong-- there are courtesy rules here which keep it from going wrong in ways which are common in SJ circles. People get credit for learning rather than being expected to get everything right, and it's at least somewhat unusual to attack people for having bad motivations.

This being said, there are squicky features here, and I'm not just talking about claims that women are different from men-- oddly enough, it generally (always?) seems to be to women's disadvantage, even though there's some evidence that women are more trustworthy at running banks and investment funds.

I tolerate posts like this, but LW would seem like a friendlier place (to me) and possibly even be more rational if articles about gender issues would take utility for men and women equally seriously.

Reactionaries had something of a home here-- less so after the formation of More Right, I think. I haven't seen evidence of anything especially extreme on the egalitarian side, though there might be as good a rationalist case to be made for thorough reparations. Now that I think about it, I haven't even seen a case made for strong economic support for intelligent poor children.

Trolley problems..... I keep getting an impression that the point is that people don't have enough inhibitions against killing for the greater good. (By the way, how easy do you think it would be to move an unwilling person who weighs a good bit more than you do?)

And torture seems to be taken too lightly. It's a real world problem, not just a token to be passed around in arguments.

What the original post made me realize is that what I consider most certain to be valuable at LW is the instrumental rationality material, and it would be a good thing for there to also be an online site for instrumental rationality without the "let's do low-empathy discussions to prove how rational we are" angle.

Comment author: gwern 03 December 2013 08:47:57PM *  41 points [-]

Won a big bet, made more money off Bitcoin than I have ever earned normally, bet the world like a boss, did some neat statistics, finished transcribing a great novel, interviewed with Mike Power & the BBC, doxed a drug lord.

Comment author: BT_Uytya 30 July 2013 10:30:25PM *  40 points [-]

The first terrifying shock comes when you realize that the rest of the world is just so incredibly stupid.

The second terrifying shock comes when you realize that they're not the only ones.

-- Nominull3 here, a nearly six-year-old quote

Comment author: lukeprog 14 July 2013 08:21:15AM 40 points [-]

What do you think [Yvain's] writing style is?

Not sure what I'd call it, but I agree with Michael Vassar that the day Yvain began "Real Work" was "a tragic day for literary history."

Comment author: shev 18 June 2013 02:01:45AM *  41 points [-]

I strongly disagree with the approaches usually recommended online, which involve some mixture of sites like CodeAcademy and looking into open source projects and lots of other hard-to-motivate things. Maybe my brain works differently, but those never appealed to me. I can't do book learning and I can't make myself just up and dedicate myself to something I'm not already drawn to. If you're similar, try this instead:

  1. Pick a thing that you have no idea how to make.
  2. Try to make it.

Now, when I say "try"... new programmers often envision just sitting down and writing, but when they try it they realize they have no idea how to do anything. Their mistake is that, actually, sitting down and knowing what to do is just not what coding is like. I always surprise people who are learning to code with this fact: when I'm writing code in any language other than my main ones (Java, mostly), I google something approximately once every two minutes. I spend most of my time searching for how to do even the most basic things. When it's time to actually make something work, it's usually just a few minutes of coding after much more time spent learning.

You should try to make the "minimum viable product" of whatever you want to make first.

If it's a game, get a screen showing - try to do it in less than an hour. Don't get sidetracked by anything else; get the screen up. Then get a character moving with arrow keys. Don't touch anything until you have a baseline you can iterate on, because every change you make should be immediately reflected in the product. Until you can see quick results from your hard work you're not going to get sucked in.

If it's a website or a product, get the server running in less than an hour. Pick a framework and a platform and go - don't get caught on the details. Setting up websites is secretly easy (python -m SimpleHTTPServer !) but if you've never done it you won't know that. If you need one set up a database right after. Get started quickly. It's possible with almost every architecture if you just search for cheat sheets and quick-start guides and stuff. You can fix your mistakes later, or start again if something goes wrong.

If you do something tedious, automate it. I have a shell script that copies some Javascript libraries and Html/JS templates into a new Dropbox folder and starts a server running there so I can go from naming my project to having an iterable prototype with some common elements I always reuse in less than five minutes. That gets me off the ground much faster and in less than 50 lines of script.
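
As a purely illustrative sketch of that kind of automation (the function, directory names, and template layout here are invented for illustration, not the commenter's actual script), it might look like:

```python
# Hypothetical project-scaffolding helper in the spirit described above.
# The directory layout and names are assumptions, invented for illustration.
import shutil
from pathlib import Path

def scaffold(project_name: str, template_dir: str, projects_dir: str) -> Path:
    """Copy a reusable template tree into a fresh project directory."""
    dest = Path(projects_dir) / project_name
    # Replicate the common JS/HTML templates wholesale into the new project.
    shutil.copytree(template_dir, dest)
    return dest
```

From the new directory, something like the `python -m SimpleHTTPServer` trick mentioned above then gives an immediately iterable prototype.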

If you like algorithms or math or whatever, sure, do Project Euler or join TopCoder - those are fun. The competition will inspire some people to be fantastic at coding, which is great. I never got sucked in for some reason, even though I'm really competitive.

If you use open source stuff, sure, take a look at that. I'm only motivated to fix things that I find lacking in tools that I use, which in practice has never led to my contributing to open source. Rather I find myself making clones of closed software so I can add features to it.

Oh, and start using Git early on. It's pretty great. Github is neat too and it basically acts as a resume if you go into programming. But remember - setting it up is secretly easy, even if you have no idea what you're doing. Somehow things you don't understand are off-putting until you look back and realize how simple they were.

Hmm, that's all that comes to mind for now. Hope it helps.

Comment author: Zaine 04 May 2013 04:41:19AM *  40 points [-]

I will quote at length here.

3. The Argument
1. Most sweatshop workers choose to accept the conditions of their employment, even if their choice is made from among a severely constrained set of options.18
2. The fact that they choose the conditions of their employment from within a constrained set of options is strong evidence that they view it as their most-preferred option (within that set).
3. The fact that they view it as their most-preferred option is strong evidence that we will harm them by taking that option away.
4. It is also plausible that sweatshop workers' choice to accept the conditions of their employment is sufficiently autonomous that taking the option of sweatshop labor away from them would be a violation of their autonomy.
5. All else being equal, it is wrong to harm people or to violate their autonomy.
6. Therefore, all else being equal, it is wrong to take away the option of sweatshop labor from workers who would otherwise choose to engage in it.


5. Challenges to The Argument
I will discuss three potential vulnerabilities in The Argument. One potential vulnerability centers on premises 1, 2, and 4, and stems from possible failures of rationality and/or freedom (which I will group together as failures of voluntariness) in sweatshop workers' consent. The second is located in premise 3, and derives from a possibly unwarranted assumption regarding the independence of a potential worker's antecedent choice-set and the offer of employment by a sweatshop. A final criticism of The Argument is centered on the conclusion (6) and holds that even if everything in premises 1-5 is true, it nevertheless ignores a crucial moral consideration. That consideration is the wrongfulness of exploitation: for one can wrongfully exploit an individual even while one provides them with options better than any of their other available alternatives.
a. Failures of Voluntariness
The first premise states that sweatshop workers choose the conditions of their employment, even if that choice is made from among a severely constrained set of options. And undoubtedly, the set of options available to potential sweatshop workers is severely constrained indeed. Sweatshop workers are usually extremely poor and seeking employment to provide for the necessities of life, so prolonged unemployment is not an option. They lack the education necessary to obtain higher-paying jobs, and very often lack the resources to relocate to where better low-skill jobs are available. Given these dire economic circumstances, do sweatshop workers really make a "choice" in the relevant sense at all? Should we not say instead, with John Miller, that whatever "choice" sweatshop workers make is made only under the "coercion of economic necessity" (Miller, 2003: 97)? And would not such coercion undermine the morally transformative power of workers' choices?
I do not think so.38 The mugging case discussed in section two shows that while coercion may undermine some sorts of moral transformation effected by choice, it does not undermine all sorts.39 Specifically, the presence of coercion does not license third parties to disregard the stated preferences of the coerced party by interfering with their activity. After all, one of the main reasons that coercion is bad is because it reduces our options. The mugger in the case above, for instance, takes away our option of continuing our life and keeping our money, and limits our choices to two — give up the money or die. Poverty can be regarded as coercive because it, too, reduces our options. Poverty reduces the options of many sweatshop workers, for instance, to a small list of poor options — prostitution, theft, sweatshop labor, or starvation. This is bad. But removing one option from that short list — indeed, removing the most preferred option — does not make things any better for the worker. The coercion of poverty reduces a worker's options, but so long as he is still free to choose from among the set of options available to him, we will do him no favors by reducing his options still further. Indeed, to do so would be a further form of coercion, not a cure for the coercion of poverty.40

[C]40. See Radcliffe Richards, 1996: 382.
[Radcliffe Richards, Janet (1996). Nephrarious goings on: Kidney sales and moral arguments. Journal of Medicine and Philosophy 21 (4):375--416.]


[C]74. For instance, in 1992, the United States congress was considering legislation known as the "Child Labor Deterrence Act." The purpose of this act was to prevent child labor by preventing the importation into the United States of any goods made, in whole or in part, by children under the age of 15. The Act never received enough support to pass, but while it was being debated, employers in several countries where child labor was widespread took preemptive action in order to maintain their ability to export to the lucrative U.S. market. One of these employers was the garment industry in Bangladesh. According to UNICEF's 1997 "State of the World's Children" report, approximately 50,000 children were laid off in 1993 in anticipation of the bill's passage. Most of these children had little education, and few other opportunities to acquire one or to obtain alternative legal employment. As a result, many of these children turned to street hustling, stone crushing, and prostitution — all of which, the report notes, are much more hazardous and exploitative than garment production (UNICEF, 1997: 60).
["Sweatshops, Choice, and Exploitation"
Matt Zwolinski
Business Ethics Quarterly, Vol. 17, No. 4 (Oct., 2007), pp. 689-727
Published by: Philosophy Documentation Center
Article Stable URL: http://www.jstor.org/stable/27673206]

From the UNICEF report:

An Agreement in Bangladesh
An important initiative to protect child workers is unfolding in Bangladesh. The country's powerful garment industry is committing itself to some dramatic new measures by an agreement signed in 1995. The country is one of the world's major garment exporters, and the industry, which employs over a million workers, most of them women, also employed child labour. In 1992, between 50,000 and 75,000 of its workforce were children under 14, mainly girls. The children were illegally employed according to national law, but the situation captured little attention, in Bangladesh or elsewhere, until the garment factories began to hide the children from United States buyers or lay off the children, following the introduction of the Child Labor Deterrence Act in 1992 by US Senator Tom Harkin. The Bill would have prohibited the importation into the US of goods made using child labour. Then, when Senator Harkin reintroduced the Bill the following year, the impact was far more devastating: garment employers dismissed an estimated 50,000 children from their factories, approximately 75 per cent of all children in the industry. The consequences for the dismissed children and their parents were not anticipated. The children may have been freed, but at the same time they were trapped in a harsh environment with no skills, little or no education, and precious few alternatives. Schools were either inaccessible, useless or costly. A series of follow-up visits by UNICEF, local non-governmental organizations (NGOs) and the International Labour Organization (ILO) discovered that children went looking for new sources of income, and found them in work such as stone-crushing, street hustling and prostitution — all of them more hazardous and exploitative than garment production. In several cases, the mothers of dismissed children had to leave their jobs in order to look after their children.
Out of this unhappy situation and after two years of difficult negotiations, a formal Memorandum of Understanding was signed in July 1995 by the Bangladesh Garment Manufacturers and Exporters Association (BGMEA), and the UNICEF and ILO offices in Bangladesh. The resulting programme was to be funded by these three organizations. BGMEA alone has committed about $1 million towards the implementation of the Memorandum of Understanding. Under the terms of the agreement, four key provisions were formulated:
• the removal of all under-age workers — those below 14 — within a period of four months;
• no further hiring of under-age children;
• the placement of those children removed from the garment factories in appropriate educational programmes with a monthly stipend;
• the offer of the children’s jobs to qualified adult family members.
The Memorandum of Understanding explicitly directed factory owners, in the best interests of these children, not to dismiss any child workers until a factory survey was completed and alternative arrangements could be made for the freed children....
[http://origin-www.unicef.org/spanish/publications/files/pub_sowc97_en.pdf. Panel 12; pg.60.]

Conclusion:
The argument that closing sweatshops leads to prostitution appears a valid one, as according to a 1997 report by UNICEF, it happened once in Bangladesh. According to that same report, provisions were established to prevent it from happening again (in Bangladesh).
(Personal opinion: There's too little evidence to determine whether the argument is actually sound. It happened once, though, and I find little reason to assume conditions in other countries are so different than they were in Bangladesh. However, thus concluding that sweatshops are good would be a misstep. One should rather conclude that if one is to close a sweatshop, provide alternative employment or enable and equip the workers to find their own alternative employment.)

Comment author: CarlShulman 09 April 2013 01:59:45AM *  40 points [-]

Luckily, my firm started collecting data on teacher aptitude some time ago, and basically you can separate all advanced math teachers easily into two categories:

  • Okay with blacks in their classroom. Blacks and whites both end up succeeding at equal rates.

  • Not okay with blacks in their classroom. Whites end up succeeding, blacks end up failing.

There are a number of research groups tracking teachers and student test scores. If such results had been released anywhere, wouldn't they be front page national news? And this seems like something that, e.g. the Gates Foundation would want known: if true, it's a magic bullet.

Why haven't academics and foundations studying teacher quality and value-added metrics reported such results?

A certain principle...remember that as principle of a school

The word is "principal."

Comment author: Daniel_Burfoot 01 April 2013 01:12:15PM 41 points [-]

Robin used a Dirty Math Trick that works on us because we're not used to dealing with large numbers. He used a large time scale of 12000 years, and assumed exponential growth in wealth at a reasonable rate over that time period. But then for depreciating the value of the wealth due to the fact that the intended recipients might not actually receive it, he used a relatively small linear factor of 1/1000 which seems like it was pulled out of a hat.

It would make more sense to assume that there is some probability every year that the accumulated wealth will be wiped out by civil war, communist takeover, nuclear holocaust, etc etc. Even if this yearly probability were small, applied over a long period of time, it would still counteract the exponential blowup in the value of the wealth. The resulting conclusion would be totally dependent on the probability of calamity: if you use a 0.01% chance of total loss, then you have about a 30% chance of coming out with the big sum mentioned in the article. But if you use a 1% chance, then your likelihood of making it to 12000 years with the money intact is 4e-53.
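
The sensitivity to the assumed annual risk is easy to check, using the same 12,000-year horizon and the two hazard rates from the comment:

```python
# Probability an accumulated fund survives the whole horizon, given a
# constant annual chance of total loss. Figures are from the comment above.
years = 12000

def survival_probability(annual_loss_prob: float) -> float:
    """Chance of avoiding total loss in every single year of the horizon."""
    return (1.0 - annual_loss_prob) ** years

p_tiny = survival_probability(0.0001)   # ~0.30: about a 30% chance
p_small = survival_probability(0.01)    # ~4e-53: effectively zero
```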

In response to LW Women: LW Online
Comment author: NancyLebovitz 15 February 2013 09:04:38AM 34 points [-]

I'm ok with the general emotional tone (lack of tone?) here. I think I read the style of discussion as "we're all here to be smart at each other, and we respect each other for being able to play".

However, the gender issues have been beyond tiresome. My default is to assume that men and women are pretty similar. LW has been the first place which has given me the impression that men and women are opposed groups. I still think they're pretty similar. The will to power is a shared trait even if it leads to conflict between opposed interests.

LW was the first place I've been where women caring about their own interests is viewed as a weird inimical trait which it's only reasonable to subvert, and I'm talking about PUA.

I wish I could find the link, but I remember telling someone he'd left women out of his utilitarian calculations. He took it well, but I wish it hadn't been my job to figure it out and find a polite way to say it.

Remember that motivational video Eliezer linked to? One of the lines toward the end was "If she puts you in the friend zone, put her in the rape zone." I can't imagine Eliezer saying that himself, and I expect he was only noticing and making use of the go for it and ignore your own pain slogans-- but I'm still shocked and angry that it's possible to not notice something like that. It's all a matter of who you identify with. Truth is truth, but I didn't want to find out that the culture had become that degraded.

And going around and around with HughRustik about PUA.... I think of him as polite and intelligent, and it took me a long time to realize that I kept saying that what I knew about PUA was what I'd read at LW, and he kept saying that it wasn't all like Roissy, who I kept saying I hadn't read. I grant that this is well within the normal range of human pigheadedness, and I'm sure I've done such myself because it can be hard to register that people hate what you love, but it was pretty grating to be on the receiving end of it.

There was that discussion of ignoring good test results from a member of a group if you already believe that they're bad at whatever was being tested. (They were referred to as blues, but it seemed to be a reference to women and math.) It was a case of only identifying with the gatekeeper. No thought about the unfairness or the possible loss of information. I think it finally occurred to someone to give a second test rather than just assuming it was a good day or good luck.

Unfortunately, I don't have an efficient way of finding these discussions I remember-- I'll be grateful if anyone finds links, and then we can see how accurate my memories were.

All this being said, I think LW has also become Less Awful so far as gender issues are concerned. I'm not sure how much anyone has been convinced that women have actual points of view (partly my fault because I haven't been tracking individuals) since there are still the complaints about what one is not allowed to say.

Comment author: Eliezer_Yudkowsky 10 January 2013 03:50:40PM 35 points [-]

Lots of strawmanning going on here (could somebody else please point these out? please?) but in case it's not obvious, the problem is that what you call "heuristic safety" is difficult. Now, most people haven't the tiniest idea of what makes anything difficult to do in AI and are living in a verbal-English fantasy world, so of course you're going to get lots of people who think they have brilliant heuristic safety ideas. I have never seen one that would work, and I have seen lots of people come up with ideas that sound to them like they might have a 40% chance of working and which I know perfectly well to have a 0% chance of working.

The real gist of Friendly AI isn't some imaginary 100% perfect safety concept, it's ideas like, "Okay, we need to not have a conditionally independent chance of goal system warping on each self-modification because over the course of a billion modifications any conditionally independent probability will sum to ~1, but since self-modification is initially carried out in the highly deterministic environment of a computer chip it looks possible to use crisp approaches that avert a conditionally independent failure probability for each self-modification." Following this methodology is not 100% safe, but rather, if you fail to do that, your conditionally independent failure probabilities add up to 1 and you're 100% doomed.
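The arithmetic behind "any conditionally independent probability will sum to ~1" is easy to make concrete (the one-in-a-million per-step figure below is an illustrative assumption, not a number from the comment):

```python
# If each self-modification carries an independent probability p of
# goal-system warping, the chance of at least one failure over n
# modifications is 1 - (1 - p)**n, which approaches 1 once n*p is large.
def cumulative_failure_prob(p, n):
    return 1 - (1 - p) ** n

# Illustrative: even a one-in-a-million chance per step is near-certain
# failure over a billion self-modifications.
print(cumulative_failure_prob(1e-6, 10**9))
```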

But if you were content with a "heuristic" approach that you thought had a 40% chance of working, you'll never think through the problem in enough detail to realize that your doom probability is not 60% but ~1, because only somebody holding themselves to a higher standard than "heuristic safety" would ever push their thinking far enough to realize that their initial design was flawed.

People at SI are not stupid. We're not trying to achieve lovely perfect safety with a cherry on top because we think we have lots of luxurious time to waste and we're perfectionists. I have an analysis of the problem which says that if I want something to have a failure probability less than 1, I have to do certain things because I haven't yet thought of any way not to have to do them. There are of course lots of people who think that they don't have to solve the same problems, but that's because they're living in a verbal-English fantasy world in which their map is so blurry that they think lots of things "might be possible" that a sharper map would show to be much more difficult than they sound.

I don't know how to take a self-modifying heuristic soup in the process of going FOOM and make it Friendly. You don't know either, but the problem is, you don't know that you don't know. Or to be more precise, you don't share my epistemic reasons to expect that to be really difficult. When you engage in sufficient detail with a problem of FAI, and try to figure out how to solve it given that the rest of the AI was designed to allow that solution, it suddenly looks that much harder to solve under sloppy conditions. Whereas on the "40% safety" approach, it seems like the sort of thing you might be able to do, sure, why not...

If someday I realize that it's actually much easier to do FAI than I thought, given that you use a certain exactly-right approach - so easy, in fact, that you can slap that exactly-right approach on top of an AI system that wasn't specifically designed to permit it, an achievement on par with hacking Google Maps to play chess using its route-search algorithm - then that epiphany will be as the result of considering things that would work and be known to work with respect to some subproblem, not things that seem like they might have a 40% chance of working overall, because only the former approach develops skill.

I'll leave that as my take-home message - if you want to imagine building plug-in FAI approaches, isolate a subproblem and ask yourself how you could solve it and know that you've solved it, don't imagine overall things that have 40% chances of working. If you actually succeed in building knowledge this way I suspect that pretty soon you'll give up on the plug-in business because it will look harder than building the surrounding AI yourself.

Comment author: alyssavance 21 June 2012 12:39:46AM 39 points [-]

Vote up this comment if you would be most likely to read a post on Less Wrong or another friendly blog.

Comment author: roystgnr 28 March 2012 04:11:57AM 41 points [-]

So Harry has an advanced intelligence of questionable tendencies locked away, but it's tantalizingly offering to be ultra useful to him if he'll only give it freer rein outside of its box?

This is sounding awfully familiar...

Comment author: Raemon 25 January 2012 10:57:30PM *  39 points [-]

"You should read the Sequences."
"It's not like Gandhi has little XML tags on him that say 'moral' "
"It's not like natural selection put little XML tags there that say 'purpose'."
"Sure, I'd take a pill that made me bisexual."
"There's this really great fanfiction you should read."
"Oh man I wanna tile the universe with ice cream trees."
"I don't want to tile the universe with bananas and palm trees."

[Sequence that will be incredibly funny in context but is a terrible idea please for goodness' sake (literally) nobody film it]

"No, we're not a cult."
"We're not a cult or anything."
"It's not like organizations have little XML tags that say 'cult' and 'not cult'"
"Any organization with a good cause can have cult attractors"
"Look I'd explain it but there's a lot of inferential distance and I don't want to be condescending."
"We are not a cult. We are not a cult."

Comment author: [deleted] 25 January 2012 09:57:05PM *  40 points [-]

most of the deaths in the holocaust were caused by the allies bombing railroads that supplied food to the camps.

I think it would be technically illegal for me to participate or update away from my default position in such a hypothetical debate.

Comment author: XiXiDu 07 November 2011 10:48:33AM 41 points [-]

What would the SIAI do given various amounts of money? Would it make a difference if you had 10 or 100 million dollars at your disposal, would a lot of money alter your strategic plan significantly?

Comment author: wedrifid 07 November 2011 06:13:52AM *  40 points [-]

The staff and leadership at the SIAI seems to be undergoing a lot of changes recently. Is instability in the organisation something to be concerned about?

Comment author: Rain 14 May 2011 11:02:06PM *  39 points [-]

For every non-duplicate comment replying to this one praising me for my right action, I will donate $10 to SIAI, up to a cap of $1010, with the count ending on 1 June 2011. Also accepting private messages.

Edit: The cap was met on 30 May. Donation of $1010 made.

Comment author: lukeprog 07 February 2011 04:40:59AM *  38 points [-]

Conversation Strategies for Spreading Rationality Without Annoying People

Comment author: RichardChappell 30 January 2011 08:40:28PM *  40 points [-]

Eliezer's metaethics might be clarified in terms of the distinctions between sense, reference, and reference-fixing descriptions. I take it that Eliezer wants to use 'right' as a rigid designator to denote some particular set of terminal values, but this reference fact is fixed by means of a seemingly 'relative' procedure (namely, whatever terminal values the speaker happens to hold, on some appropriate [if somewhat mysterious] idealization). Confusions arise when people mistakenly read this metasemantic subjectivism into the first-order semantics or meaning of 'right'.

In summary:

(i) 'Right' means, roughly, 'promotes external goods X, Y and Z'

(ii) claim i above is true because I desire X, Y, and Z.

Note that Speakers Use Their Actual Language, so murder would still be wrong even if I had the desires of a serial killer. But if I had those violent terminal values, I would speak a slightly different language than I do right now, so that when KillerRichard asserts "Murder is right!" what he says is true. We don't really disagree, but are instead merely talking past each other.

Virtues of the theory:

(a) By rigidifying on our actual, current desires (or idealizations thereupon), it avoids Inducing Desire Satisfactions.

(b) Shifting the subjectivity out to the metasemantic level leaves us with a first-order semantic proposal that at least does a better job than simple subjectivism at 'saving the phenomena'. (It has echoes of Mark Schroeder's desire-based view of reasons, according to which the facts that give us reasons are the propositional contents of our desires, rather than the desires themselves. Or something like that.)

(c) It's naturalistic, if you find moral non-naturalism 'spooky'. (Though I'd sooner recommend Mackie-style error theory for naturalists, since I don't think (b) above is enough to save the phenomena.)

Objections

(1) It's incompatible with the datum that substantive, fundamental normative disagreement is in fact possible. People may share the concept of a normative reason, even if they fundamentally disagree about which features of actions are the ones that give us reasons.

(2) The semantic tricks merely shift the lump under the rug, they don't get rid of it. Standard worries about relativism re-emerge, e.g. an agent can know a priori that their own fundamental values are right, given how the meaning of the word 'right' is determined. This kind of (even merely 'fundamental') infallibility seems implausible.

(3) Just as simple subjectivism is an implausible theory of what 'right' means, so Eliezer's meta-semantic subjectivism is an implausible theory of why 'right' means promoting external goods X, Y, Z. An adequately objective metaethics shouldn't even give preferences a reference-fixing role.

Comment author: fubarobfusco 13 December 2010 05:39:48PM *  38 points [-]

The evil in creating house-elves is not that they like doing chores -- it is that they suffer and cannot do anything about it.

They are capable of feeling pain (and psychological anguish) but are incapable of defending themselves, avenging wrongs done to them, or even demanding better treatment. They are entirely dependent on specific humans (their house's family) for any happiness or satisfaction they might receive. They do not have the choice of leaving.

As such, they are perfect victims for abusers. Like human abuse victims, they can be provoked to self-blame and self-harm; but they do not (as far as we are told) have even the option of suicide. Their only possible hopes are to be inherited by a kind master or to entirely unintentionally dissatisfy their masters so much that they are freed -- and even this latter appears to cause significant psychological damage in some cases.

Creating house-elves (as they are presented in the novels) is not like creating servile robots. It is not like creating a being that enjoys the thought of being killed and eaten. It is like imbuing a punching-bag with the ability not only to feel pain, but to contemplate the horror of its lot in life.

Comment author: vericrat 03 March 2015 09:28:47PM 40 points [-]

"I wonder how difficult it would be to just make a list of all the top blood purists and kill them.

They'd tried exactly that during the French Revolution, more or less - make a list of all the enemies of Progress and remove everything above the neck..." -Harry's internal monologue, HPMOR Chapter 7

"Amusing, but that was not your first fleeting thought before you substituted something safer, less damaging. No, what you remembered was how you considered lining up all the blood purists and guillotining them. And now you are telling yourself you were not serious, but you were. If you could do it this very moment and no one would ever know, you would." -The Sorting Hat, HPMOR Chapter 10

Well...well I guess it wasn't technically a guillotine. And Harry didn't make a list himself. But Harry did do it, and set it up so no one would ever know.

Comment author: cata 09 July 2014 06:48:37AM *  40 points [-]

Because you underestimate how off-putting it is to people when things are deleted with no clear accountability or visibility. It's way worse than having an off-topic lousy post sitting on the page for a few days. It is like a hundred times worse.

You have to provide transparency (e.g. a "see deleted" section or a list of moderator actions) or rationale (e.g. Metafilter's deletion reasons and MetaTalk) or people get paranoid that there is weird, self-interested censorship and that the moderators aren't acting in the interests of the community. This is an Ancient Internet Feeling.

Comment author: trist 12 March 2014 03:00:19PM 35 points [-]

Irrationality Game: (meta, I like this idea)

Flush toilets are a horrible mistake. 7b/99%

Comment author: Gunnar_Zarncke 06 January 2014 10:58:16PM 40 points [-]

OK. I already mentioned that I'm preparing a longer post about it, but I didn't brag about it:

Despite being left by my wife for a younger guy after 15 years and 4 children I have acted sensibly, rationally you might say, and bought a house, negotiated a fair marriage contract (separation of property), completed a much overdue freelance project and cared for the children a lot. I didn't break. I didn't hate. I didn't run away. I think I succeeded in saving my sanity (salvaging instead of destroying emotions from the relationship), providing a dependable, caring and safe environment for the children now and in the future (the house is on the other side of the street). And I get along well with my future ex-wife and her new partner, and avoid alienation of her by family and friends.

Actually all of that didn't happen in December but it is effectively done now. The children moved into the new house on Jan, 1st and all paperwork and such is done.

Comment author: moridinamael 06 January 2014 12:45:19PM *  39 points [-]

It's funny, I am totally sympathetic to everything you wrote here, yet all I can think is, "my daily life is chock full of people incapable of grappling with trolley problems or discussing torture concretely, why are you trying to make LessWrong more like real life?"

Comment author: shminux 06 January 2014 07:15:47AM 37 points [-]

Just wanted to mention that an amazing amount of arguments in this thread and in the linked piece consists of misidentified non-central fallacies (in Yvain's labelling). None of the targets of the labels used ("racist", "eugenics", "feminist", what have you), correspond to a typical image evoked by using them.

Comment author: Vulture 05 January 2014 12:00:26AM *  38 points [-]

I spent my childhood believing I was destined to be a hero

in some far off magic kingdom.

It was too late when I realized that I was needed here.

--A Softer World

In response to Mistakes repository
Comment author: solipsist 09 September 2013 05:30:17AM 40 points [-]

Placing zero value on the ability to look, dress, and act like a non-nerd. I seriously overestimated the effort and underestimated the benefits.

Comment author: Yvain 11 August 2013 07:39:44AM *  39 points [-]

I like this idea, but dislike inflation of the word "debunking".

Debunking means something was bunk and has now been conclusively proven wrong. Homeopathy has been debunked, creationism has been debunked, ESP has been debunked.

But when people say things like "Haven't you heard Searle debunked materialism?" or "Here's a link to an argument debunking Obamacare" it seems kind of like epistemological arrogance. It's not just "I disagree with you", but "There is no other side to this, it is now disproven in the same sense creationism is disproven and we can all go home."

I sort of accept the Myers-Briggs link as a debunking, because that fits the central category of "supposedly scientific theory that in fact has very poor support". The others seem more like controversial philosophical or political arguments. They're all really good controversial philosophical/political arguments I agree with, but I bet by the time this list reaches twenty entries some of them won't be.

I admit I don't have a better phrase. "Skeptical Argument Repository"?

Comment author: VincentYu 06 April 2013 03:24:59AM *  34 points [-]

You claim that medical researchers are doing logical inference incorrectly. But they are in fact doing statistical inference and arguing inductively.

Statistical inference and inductive arguments belong in a Bayesian framework. You are making a straw man by translating them into a deductive framework.

Rephrased to say precisely what the study found:

This study tested and rejected the hypothesis that artificial food coloring causes hyperactivity in all children.

No. Mattes and Gittelman's finding is stronger than your rephrasing—your rephrasing omits evidence useful for Bayesian reasoners. For instance, they repeatedly pointed out that they “[studied] only children who were already on the Feingold diet and who were reported by their parents to respond markedly to artificial food colorings.” They claim that this is important because “the Feingold diet hypothesis did not originate from observations of carefully diagnosed children but from anecdotal reports on children similar to the ones we studied.” In other words, they are making an inductive argument:

  1. Most evidence for the Feingold diet hypothesis comes from anecdotal reports.
  2. Most of these anecdotal reports are mistaken.
  3. Thus, there is little evidence for the Feingold diet hypothesis.
  4. Therefore, the Feingold diet hypothesis is wrong.

If you translate this into a deductive framework, of course it will not work. Their paper should be seen in a Bayesian framework, and in this context, their final sentence

The results of this study indicate that artificial food colorings do not affect the behavior of school-age children who are claimed to be sensitive to these agents.

translates into a correct statement about the evidence resulting from their study.

This refereed medical journal article, like many others, made the same mistake as my undergraduate logic students, moving the negation across the quantifier without changing the quantifier. I cannot recall ever seeing a medical journal article prove a negation and not make this mistake when stating its conclusions.

They are not making this mistake. You are looking at a straw man.
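The quantifier error under dispute -- reading "not (coloring affects all children)" as "coloring affects no children" -- can be made concrete with a toy dataset (the flags below are made up purely for illustration):

```python
# Hypothetical per-child flags: did food coloring affect this child?
affected = [False, False, True]  # invented data, for illustration only

not_all_affected = not all(affected)  # rejects "it affects ALL children"
none_affected = not any(affected)     # asserts "it affects NO children"

# Moving the negation across the quantifier conflates these two claims:
print(not_all_affected)  # True
print(none_affected)     # False
```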


Full-texts:

Comment author: therufs 08 March 2013 05:18:58PM 40 points [-]

If you are looking for employment, tell everyone you know. I have gotten 100% of my jobs from friends saying "hey, did you hear about this one".

Comment author: FiftyTwo 08 March 2013 05:51:16AM 39 points [-]

If you feel sad when you shouldn't feel sad, consult a medical professional or therapist; they can help.

[Wish I'd realised that a few years ago.]

Comment author: Dr_Manhattan 07 March 2013 04:52:06PM 40 points [-]

Alternative: commute effectively. Taking a train to NYC from Long Island I get almost 2 hours to read/watch lectures or entertainment. Some days these are the 2 best hours of the day.

Comment author: Mitchell_Porter 31 January 2013 04:19:04PM 38 points [-]

MIRI's number-one goal will be the discovery of a Consequentialist Anticipatory Logic that can save the world (codename, MIRI-CAL).

Comment author: Emile 29 January 2013 10:45:57AM 4 points [-]

Demo: upvote or downvote this comment all over the place.

In response to Just One Sentence
Comment author: ShardPhoenix 05 January 2013 06:02:09AM *  40 points [-]

If this is to be taken as a sort of prophetic/religious statement that will certainly be believed, how about this:

"It is better to rely on the labour of machines than the labour of beasts, and better to rely on the labour of beasts than the labour of man".

(Based on the idea that historically, technological progress was often disincentivized by the abundance of cheap/slave labour).

Comment author: buybuydandavis 26 December 2012 03:13:49AM 39 points [-]

Suggestion: I recommend sending people their deleted posts.

I find it annoying to spend the effort to type a post, only to have it disappear into a bit bucket. If you want it gone, that's your prerogative, but I think it is a breach of etiquette for a forum to destroy information created by a forum user.

Now I assume you found the original post a breach of etiquette, so may feel that tit for tat is the right policy here. I'd consider an intentional breach of etiquette as an unnecessary escalation.

Comment author: RomeoStevens 11 September 2012 11:01:22PM 35 points [-]

everyone needs to stop being such a cunt about it.

Comment author: gwern 04 August 2012 02:10:32PM *  34 points [-]

Let's take the outside view for a second. After all, if you want to save the planet from AIs, you have to do a lot of thinking! You have to learn all sorts of stuff and prove it and just generally solve a lot of eye-crossing philosophy problems which just read like slippery bullshit. But if you want to save the planet from asteroids, you can conveniently do the whole thing without ever leaving your own field and applying all the existing engineering and astronomy techniques. Why, you even found a justification for NASA continuing to exist (and larding out pork all over the country) and better yet, for the nuclear weapons program to be funded even more (after all, what do you think you'll be doing when the Shuttle gets there?).

Obviously, this isn't any sort of proof that anti-asteroid programs are worthless self-interested rent-seeking government pork.

But it sure does seem suspicious that continuing business as usual to the tune of billions can save the entire species from certain doom.

Comment author: Kindly 03 July 2012 10:51:20PM 24 points [-]

Irrationality game

0 and 1 are probabilities. (100%)

Comment author: [deleted] 12 June 2012 11:11:51PM *  38 points [-]

Glenn Beck also openly states that his books are ghost written. So maybe he's heard of Kurzweil and wants to talk about the singularity, but I'm guessing he's not the one reading SI publications.

Comment author: JoshuaZ 15 May 2012 03:00:56PM *  39 points [-]

and don't disagree with anything I've ever seen written by Vladimir Nesov, Kaj Sotala, Luke Muelhauser, komponisto, or even Wei Da

This confuses me since these people are not in agreement on some issues.

Comment author: Yvain 15 March 2012 06:10:44PM *  39 points [-]

Compare "intelligent" to "fast".

I say "The cheetah will win the race, because it is very fast."

This has some explanatory power. It can distinguish between various reasons a cheetah might win a race: maybe it had a head start, or its competitors weren't trying very hard, or it cheeted. Once we say "The cheetah won the race because it was fast" we know more than we did before.

The same is true of "General Lee won his battles because he was intelligent". It distinguishes the case in which he was a tactical genius from the case where he just had overwhelming numbers, or was very lucky, or had superior technology. So "intelligent" is totally meaningful here.

None of these are a lowest-level explanation. We can further explain a cheetah's fast-ness by talking about its limb musculature and its metabolism and so on, and we can further explain General Lee's intelligence by talking about the synaptic connections in his brain (probably). But we don't always need the lowest possible level of explanation; no one gets angry because we don't explain World War I by starting with "So, there were these electrons in Gavrilo Princip's brain that got acted upon by the electromagnetic force..."

A Mysterious Explanation isn't just any time you use a non-lowest-level explanatory word. It's when you explain something on one level by referring to something on the same level.

I have lost track of how many times I have heard people say, "an artificial general intelligence would have a genuine intelligence advantage" as if that explained its advantage.

Just as it is acceptable to say "General Lee won the battle because he was intelligent", so it is acceptable to say "The AI would conquer Rome because it was intelligent".

(just as it is acceptable to say "cavalry has an advantage over artillery because it is fast")

In fact, in the context of the quote, we were talking about the difference between a random modern human trying to take over Rome, and an AI trying to take over modern civilization. The modern human's advantage would be in technology and foreknowledge (as if General Lee won his battles by having plasma rifles and knowing all the North's moves in advance even though he wasn't that good a tactician); the AI might have those advantages, but also be more intelligent.

In response to comment by [deleted] on [Poll] Method of Recruitment
Comment author: [deleted] 06 February 2012 10:03:07PM *  38 points [-]

Vote Here for:

I would NOT like this comment removed OR indifference

Comment author: Alejandro1 26 January 2012 03:56:01AM 40 points [-]

Bear in mind that some contrarian statements might have been upvoted for being valuable as examples and contributions to the thread, rather than for substantial agreement. Also there is a selection effect: a contrarian sharing an unpopular opinion is very likely to upvote it when seeing a kindred spirit, but a non-contrarian who doesn't share it is unlikely to downvote it (especially in a thread like this one where the point is to encourage contrarian opinions to come out).

Comment author: Khoth 29 September 2011 10:36:16AM *  40 points [-]

Generally, if you're given evidence for something, the evidence-giver is trying to convince you of that something. If you're given only weak evidence, that itself is evidence that there is no strong evidence (if there is strong evidence, why didn't they tell you that instead?), and so in some circumstances it could be rational to downgrade your probability estimate.

In response to Consequentialism FAQ
Comment author: Vladimir_M 27 April 2011 01:03:19AM *  40 points [-]

OK, I've read the whole FAQ. Clearly, a really detailed critique would have to be given at similar length. Therefore, here is just a sketch of the problems I see with your exposition.

For a start, you use several invalid examples, or at least controversial examples that you incorrectly present as clear-cut. For example, the phlogiston theory was nothing like the silly strawman you present. It was a falsifiable scientific theory that was abandoned because it was eventually falsified (when it was discovered that burning stuff adds mass due to oxidation, rather than losing mass due to escaped phlogiston). It was certainly a reductionist theory -- it attempted to reduce fire (which itself has different manifestations) and the human and animal metabolism to the same underlying physical process. (Google "Becher-Stahl theory".) Or, at another place, you present the issue of "opposing condoms" as a clear-cut case of "a horrendous decision" from a consequentialist perspective -- although in reality the question is far less clear.

Otherwise, up to Section 4, your argumentation is passable. But then it goes completely off the rails. I'll list just a few main issues:

  • In the discussion of the trolley problem, you present a miserable caricature of the "don't push" arguments. The real reason why pushing the fat man is problematic requires delving into a broader game-theoretic analysis that establishes the Schelling points that hold in interactions between people, including those gravest ones that define unprovoked deadly assault. The reason why any sort of organized society is possible is that you can trust that other people will always respect these Schelling points without regard to any cost-benefit calculations, except perhaps when the alternative to violating them is by orders of magnitude more awful than in the trolley examples. (I have compressed an essay's worth of arguments into a few sentences, but I hope the main point is clear.)

  • In Section 5, you don't even mention the key problem of how utilities are supposed to be compared and aggregated interpersonally. If you cannot address this issue convincingly, the whole edifice crumbles.

  • In Section 6, at first it seems like you get the important point that even if we agree on some aggregate welfare maximization, we have no hope of getting any practical guidelines for action beyond quasi-deontologist heuristics. But then you boldly declare that "we do have procedures in place for breaking the heuristic when we need to." No, we don't. You may think we have them, but what we actually have are either somewhat more finely tuned heuristics that aren't captured by simple first-order formulations (which is good), or rationalizations and other nonsensical arguments couched in terms of a plausible-sounding consequentialist analysis (which is often a recipe for disaster). The law of unintended consequences often bites even in seemingly clear-cut "what could possibly go wrong?" situations.

  • Along similar lines, you note that in any conflict all parties are quick to point out that their natural rights are at stake. Well, guess what. If they just have smart enough advocates, they can also all come up with different consequentialist analyses whose implications favor their interests. Different ways of interpersonal utility comparison are often themselves enough to tilt the scales as you like. Further, these analyses will all by necessity be based on spherical-cow models of the real world, which you can usually engineer to get pretty much any implication you like.

  • Section 7 is rather incoherent. You jump from one case study to another arguing that even when it seems like consequentialism might imply something revolting, that's not really so. Well, if you're ready to bite awful consequentialist bullets like Robin Hanson does, then be explicit about it. Otherwise, clarify where exactly you draw the lines.

  • Since we're already at biting bullets, your FAQ fails to address another crucial issue: it is normal for humans to value the welfare of some people more than others. You clearly value your own welfare and the welfare of your family and friends more than strangers (and even for strangers there are normally multiple circles of diminishing caring). How to reconcile this with global maximization of aggregate utility? Or do you bite the bullet that it's immoral to care about one's own family and friends more than strangers?

  • Question 7.6 is the only one where you give even a passing nod to game-theoretical issues. Considering their fundamental importance in the human social order and all human interactions, and their complex and often counter-intuitive nature, this fact by itself means that most of your discussion is likely to be remote from reality. This is another aspect of the law of unintended consequences that you nonchalantly ignore.

  • Finally, your idea that it is possible to employ economists and statisticians and get accurate and objective consequentialist analysis to guide public policy is altogether utopian. If such things were possible, economic central planning would be a path to prosperity, not the disaster that it is. (That particular consequentialist folly was finally abandoned in the mainstream after it had produced utter disaster in a sizable part of the world, but many currently fashionable ideas about "scientific" management of government and society suffer from similar delusions.)
