Comment author: [deleted] 05 April 2011 07:18:41PM 68 points [-]

If you think that humans are nothing but Turing machines, why is it morally wrong to kill a person but not morally wrong to turn off a computer?

Your question has the form:

If A is nothing but B, then why is it X to do Y to A but not to do Y to C which is also nothing but B?

This following question also has this form:

If apple pie is nothing but atoms, why is it safe to eat apple pie but not to eat napalm which is also nothing but atoms?

And here's the general answer to that question: the molecules which make up apple pie are safe to eat, and the molecules which make up napalm are unsafe to eat. This is possible because these are not the same molecules.

Now let's turn to your own question and give a general answer to it: it is morally wrong to shut off the program which makes up a human, but not morally wrong to shut off the programs which are found in an actual computer today. This is possible because these are not the same programs.

At this point I'm sure you will want to ask: what is so special about the program which makes up a human, that it would be morally wrong to shut off the program? And I have no answer for that. Similarly, I couldn't answer you if you asked me why the molecules of apple pie are safe to eat and those of napalm are not.

As it happens, chemistry and biology have probably advanced to the point at which the question about apple pie can be answered. However, the study of mind/brain is still in its infancy, and as far as I know, we have not advanced to the equivalent point. But this doesn't mean that there isn't an answer.

Comment author: Anatoly_Vorobey 18 January 2011 04:47:07PM *  68 points [-]

I'm skeptical, and will now proceed to question some of the assertions made/references cited. Note that I'm not trained in statistics.

Unfortunately, most of the articles cited are not easily available. I would have liked to check the methodology of a few more of them.

For example, one SPR developed in 1995 predicts the price of mature Bordeaux red wines at auction better than expert wine tasters do.

The paper doesn't actually establish what you say it does. There is no statistical analysis of expert wine tasters, only one or two anecdotal statements of their fury at the whole idea. Instead, the SPR is compared to actual market prices - not to experts' predictions. I think it's fair to say that the claim I quoted overreaches.

Now, about the model and its fit to data. Note that the SPR is older than 1995, when the paper was published. The NYTimes article about it which you reference is from 1990 (the paper bizarrely dates it to 1995; I'm not sure what's going on there).

The fact that there's a linear model - not specified precisely anywhere in the article - which is a good fit to wine prices for vintages of 1961-1972 (Table 3 in the paper) is not, I think, very significant on its own. To judge the model, we should look at what it predicts for upcoming years. Both the paper and the NYTimes article make two specific predictions. First, the 1986 vintage, claimed to be extolled by experts early on, will prove mediocre because of the weather conditions that year (see Figure 3 in the paper - 1986 is clearly the worst of the 1980s). NYTimes says "When the dust settles, he predicts, it will be judged the worst vintage of the 1980's, and no better than the unmemorable 1974's or 1969's". The 1995 paper says, more modestly, "We should expect that, in due course, the prices of these wines will decline relative to the prices of most of the other vintages of the 1980s". Second, the 1989-1990 vintages are predicted to be "outstanding" (paper), "stunningly good" (NYTimes), and "adjusted for age, will outsell at a significant premium the great 1961 vintage" (NYTimes).
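For readers unfamiliar with the setup: the model in question is an ordinary least-squares regression of log auction price on vintage age and weather variables. Here is a minimal sketch of that kind of fit - the variable names follow Ashenfelter's general setup, but every number below is invented purely for illustration:

```python
import numpy as np

# Hypothetical vintage data: age (years), growing-season temperature (C),
# harvest rainfall (mm), and log auction price. All values invented.
age = np.array([30, 28, 25, 22, 20, 18, 15, 12], dtype=float)
temp = np.array([17.1, 16.7, 17.3, 16.3, 17.0, 16.4, 17.2, 16.8])
rain = np.array([60, 120, 40, 170, 80, 150, 50, 100], dtype=float)
logp = np.array([3.2, 2.6, 3.4, 2.0, 2.9, 2.2, 3.1, 2.5])

# Design matrix with an intercept column, then an ordinary least-squares fit.
X = np.column_stack([np.ones_like(age), age, temp, rain])
coef, *_ = np.linalg.lstsq(X, logp, rcond=None)

predicted = X @ coef
print(coef)  # intercept, age, temperature, and rainfall coefficients
```

The point of the critique stands regardless of the fitting machinery: an in-sample fit like this says little until the model's out-of-sample predictions are checked against later prices.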

It's now 16 years later. How do we test these predictions?

First, I've stumbled on a different paper from the primary author, Prof. Ashenfelter, from 2007. Published 12 years later than the one you reference, this paper has substantially the same contents, with whole pages copied verbatim from the earlier one. That, by itself, worries me. Even more worrying is the fact that the 1986 prediction, prominent in the 1990 article and the 1995 paper, is completely missing from the 2007 paper (the data below might indicate why). And most worrying of all is the change of language regarding the 1989/1990 prediction. The 1995 paper says about its prediction that the 1989/1990 will turn out to be outstanding, "Many wine writers have made the same predictions in the trade magazines". The 2007 paper says "Ironically, many professional wine writers did not concur with this prediction at the time. In the years that have followed minds have been changed; and there is now virtually unanimous agreement that 1989 and 1990 are two of the outstanding vintages of the last 50 years."

Uhm. Right. Well, because the claims aren't strong enough, they do not exactly contradict each other, but this change leaves a bad taste. I don't think I should give much trust to these papers' claims.

The data I could find quickly to test the predictions is here. The prices are broken down by the chateaux, by the vintage year, the packaging (I've always chosen BT - bottle), and the auction year (I've always chosen the last year available, typically 2004). Unfortunately, Ashenfelter underspecifies how he came up with the aggregate prices for a given year - he says he chose a package of the best 15 wineries, but doesn't say which ones or how the prices are combined. I used 5 wineries that are specified as the best in the 2007 paper, and looked up the prices for years 1981-1990. The data is in this spreadsheet. I haven't tried to statistically analyze it, but even from a quick glance, I think the following is clear. 1986 did not stabilize as the worst year of the 1980s. It is frequently second- or third-best of the decade. It is always much better than either 1984 or 1987, which are supposed to be vastly better according to the 1995 paper's weather data (see Figure 3). 1989/1990 did turn out well, especially 1990. Still, they're both nearly always less expensive than 1982, which is again vastly inferior in the weather data (it isn't even in the best quarter). Overall, I fail to see much correlation between the weather data in the paper for the 1980s, the specific claims about 1986 and 1989/1990, and the market prices as of 2004. I wouldn't recommend using this SPR to predict market prices.

Now, this was the first example in your post, and I found what I believe to be substantial problems with its methodology and the quality of its SPR. If I were to proceed and examine every example you cite in the same detail, would I encounter many such problems? It's difficult to tell, but my prediction is "yes". I anticipate overfitting and shoddy methodology. I anticipate a huge influence of selection bias - the authors who publish these kinds of papers will not publish a paper that says "The experts were better than our SPR". And finally, I anticipate overreaching claims of wide applicability for the models, based on papers that actually indicate a modest effect in a very specific situation with a small sample size.

I've looked at your second example:

Howard and Dawes (1976) found they can reliably predict marital happiness with one of the simplest SPRs ever conceived, using only two cues: P = [rate of lovemaking] - [rate of fighting].

I couldn't find the original paper, but the results are summarised in Dawes (1979). Looking at it, it turns out that when you say "predict marital happiness", it really means "predicts one of the partners' subjective opinion of their marital happiness" - as opposed to e.g. stability of the marriage over time. There's no indication as to how the partner to question was chosen from each pair (e.g. whether the experimenter knew the rate when they chose). There was very good correlation with the binary outcome (happy/unhappy), but when a finer scale of 7 degrees of happiness was used, the correlation was weak - a coefficient of 0.4. In a follow-up experiment, the correlation rose to 0.8, but there the subject looked at the lovemaking/fighting statistics before opining on the degree of happiness, thus contaminating their decision. And even in the earlier experiment, the subject had been recording those lovemaking/fighting statistics in the first place, so it would make sense for them to recall those events when asked to assess whether their marriage is a happy one. Overall, the model is witty and naively appears to be useful, but the suspect methodology and the relatively weak correlation encourage me to discount the analysis.

Finally, the following claim is the single most objectionable one in your post, to my taste:

If you're hiring, you're probably better off not doing interviews.

My own experience strongly suggests to me that this claim is inane - and is highly dangerous advice. I'm not able to view the papers you base it on, but if they're anything like the first and second example, they're far, far away from convincing me of the truth of this claim, which I in any case strongly suspect to overreach gigantically over what the papers are proving. It may be true, for example, that a very large body of hiring decision-makers in a huge organisation or a state on average make poorer decisions based on their professional judgement during interviews than they would have made based purely on the resume. I can see how this claim might be true, because any such very large body must be largely incompetent. But it doesn't follow that it's good advice for you to abstain from interviewing - it would only follow if you believe yourself to be no more competent than the average hiring manager in such a body, or in the papers you reference. My personal experience from interviewing many, many candidates for a large company suggests that interviewing is crucial (though I will freely grant that different kinds of interviews vary wildly in their effectiveness).

Comment author: Wei_Dai 24 July 2009 11:02:03PM 69 points [-]
  • It was easier for Eliezer Yudkowsky to reformulate decision theory to exclude time than to buy a new watch.
  • Eliezer Yudkowsky's favorite sport is black hole diving. His information density is so great that no black hole can absorb him, so he just bounces right off the event horizon.
  • God desperately wants to believe that when Eliezer Yudkowsky says "God doesn't exist," it's just good-natured teasing.
  • Never go in against Eliezer Yudkowsky when anything is on the line.
Comment author: Yvain 22 March 2009 12:42:03PM *  67 points [-]

I recently read an article on charitable giving which mentioned how people split up their money among many different charities to, as they put it, "maximize the effect", even though someone with this goal should donate everything to the single highest-utility charity. And this seems a bit like the example you cited where, if blue cards came up randomly 75% of the time and red cards came up 25% of the time, people would bet on blue 75% of the time even though the optimal strategy is to bet blue 100% of the time. All this seems to come from concepts like "Don't put all your eggs in one basket", which is a good general rule for things like investing but can easily break down.
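The cost of probability matching in the card example is easy to check numerically. The following toy simulation (illustrative only, not from the cited example) compares the two strategies under the 75/25 split:

```python
import random

random.seed(0)
N = 100_000
# Deal cards: blue with probability 0.75, red with probability 0.25.
cards = ["blue" if random.random() < 0.75 else "red" for _ in range(N)]

# Strategy 1: probability matching -- bet blue 75% of the time.
matching_wins = sum(
    ("blue" if random.random() < 0.75 else "red") == card for card in cards
)

# Strategy 2: always bet blue.
always_blue_wins = sum(card == "blue" for card in cards)

print(matching_wins / N)     # ~0.625, i.e. 0.75*0.75 + 0.25*0.25
print(always_blue_wins / N)  # ~0.75
```

Probability matching wins only about 62.5% of bets; always betting the majority color wins 75%.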

I find myself having to fight this rule for a lot of things, and one of them is beliefs. If all of my opinions are Eliezer-ish, I feel like I'm "putting all my eggs in one basket", and I need to "diversify". You use book recommendations as a reductio, but I remember reading about half the books on your recommended reading list, thinking "Does reading everything off of one guy's reading list make me a follower?" and then thinking "Eh, as soon as he stops recommending such good books, I'll stop reading them."

The other thing is the Outside View summed up by the proverb "If two people think alike, one of them isn't thinking." In the majority of cases I observe where a person conforms to all of the beliefs held by a charismatic leader of a cohesive in-group, and keeps praising that leader's incredible insight, that person is a sheeple and that leader has a cult (see: religion, Objectivism, various political movements). I respect the Outside View enough that I have trouble replacing it with the Inside View that although I agree with Eliezer about nearly everything and am willing to say arbitrarily good things about him, I'm certainly not a cultist because I'm coming to my opinions based on Independent Logic and Reason. I don't know any way of solving this problem except the hard way.

"note: Hofstadter does not have a cult"

I tried to start a Hofstadter cult once. The first commandment was "Thou shalt follow the first commandment." The second commandment was "Thou shalt follow only those even-numbered commandments that do not exhort thee to follow themselves." I forget the other eight. Needless to say it didn't catch on.

Comment author: Sigmaleph 23 October 2014 02:14:57AM 68 points [-]

Did the survey. Also, now I know my digit ratio!

Comment author: James_Miller 05 September 2014 08:36:09PM 69 points [-]

A skilled professional I know had to turn down an important freelance assignment because of a recurring commitment to chauffeur her son to a resumé-building “social action” assignment required by his high school. This involved driving the boy for 45 minutes to a community center, cooling her heels while he sorted used clothing for charity, and driving him back—forgoing income which, judiciously donated, could have fed, clothed, and inoculated an African village. The dubious “lessons” of this forced labor as an overqualified ragpicker are that children are entitled to treat their mothers’ time as worth nothing, that you can make the world a better place by destroying economic value, and that the moral worth of an action should be measured by the conspicuousness of the sacrifice rather than the gain to the beneficiary.

Steven Pinker

Comment author: JoshuaZ 03 February 2012 05:33:42AM 66 points [-]

Doctor Slithingly watched the readout on the computer screen and rubbed his hands together. ‘Excellent,’ he muttered, his voice a thin, rasping hiss. ‘Excellent!’ He laughed to himself in a chilling falsetto. ‘Soon my plan will come to fruition. Soon I will destroy them all!’ The room resounded with the sound of his insane giggling. This was the culmination of years of research – years of testing tissue samples and creating unnatural biological hybrids – but now it was over. Now, finally, he would destroy them all – every single type and variation of leukaemia. In doing so, he would render useless the work of thousands of charitable organisations as well as denying medical professionals the world over a source of income. He would prevent the publication of hundreds of inspiring stories of survival and sacrifice which might otherwise have sold millions of copies worldwide. ‘Bwahaha!’ he laughed. ‘So long, you meddling haematological neoplasm, you!’

Joel Stickley, How To Write Badly Well

Comment author: RichardKennaway 01 February 2010 10:16:33AM 68 points [-]

From a BBC interview with a retiring Oxford Don:

Don: "Up until the age of 25, I believed that 'invective' was a synonym for 'urine'."

BBC: "Why ever would you have thought that?"

Don: "During my childhood, I read many of the Edgar Rice Burroughs 'Tarzan' stories, and in those books, whenever a lion wandered into a clearing, the monkeys would leap into the trees and 'cast streams of invective upon the lion's head.'"

BBC: <long pause> "But, surely sir, you now know the meaning of the word."

Don: "Yes, but I do wonder under what other misapprehensions I continue to labour."

Comment author: Solvent 02 February 2012 06:03:59AM 68 points [-]

And here, according to Trout, was the reason human beings could not reject ideas because they were bad: Ideas on Earth were badges of friendship or enmity. Their content did not matter. Friends agreed with friends, in order to express friendliness. Enemies disagreed with enemies, in order to express enmity. The ideas Earthlings held didn’t matter for hundreds of thousands of years, since they couldn’t do much about them anyway. Ideas might as well be badges as anything.

Kurt Vonnegut, Breakfast of Champions

Comment author: grouchymusicologist 21 October 2011 04:47:23AM *  66 points [-]

A handful of points, without any particular axe to grind, from a professional music scholar:

(1) The Great Fugue is difficult to like, difficult to know what to make of -- even most of its passionate advocates would agree to that -- and there's no particular reason to think that opinions from wildly positive to wildly negative are not all within the realm of the reasonable responses to this piece. A huge amount of scholarly ink has been spilled on why it, and the late string quartets, and the Missa Solemnis, are so peculiar.

(2) Relatedly, people who love it and think that it's obviously, uncomplicatedly lovable may well be putting on airs or signaling. And as with any piece of music that has gigantic prestige built up around it (partly due to its reputation for being super-profound and inscrutable), all opinions should probably be regarded with some suspicion of signaling behavior.

(3) Think of someone who has repeatedly shown herself to be a brilliant, extremely sound thinker. You come to trust her opinions on a wide range of topics. When she says something you find absolutely bizarre or inscrutable, you're going to at a very minimum think carefully about what she says to see if the fault is with you. If you're a fan of most of the music Beethoven writes, I encourage you to give him a similar benefit of the doubt.

(4) I myself find the Great Fugue remarkable but not at all pleasant -- in fact, while Beethoven holds me enraptured right up through the Last Five Sonatas and the Ninth Symphony, he loses me a bit with the Missa Solemnis and the late string quartets, with the exception of a few isolated movements. You're certainly not wrong to suggest that admitting these views in academic music circles is low-prestige (although not as much so as it used to be), but a major factor in this is my point (3) above: Beethoven has generally earned the benefit of the doubt. Also, it's equally low-prestige in those circles to run around gushing about how amazing the Great Fugue is without having some interesting things to say about why you think so.

(5) I am totally baffled why you are so convinced that quality must be something that inheres to a piece of music. Quality is subjective, or at most inter-subjective, and aesthetic judgments do not contain truth value.

(6) Whatever you think you mean by suggesting that the music of Alban Berg (not sure why you picked him) lacks "basic music theory," I can completely guarantee you that you are wrong. Music theory is not a property of musical compositions any more than linguistics is a property of language. If what you mean is that Alban Berg was not a composer of tonal music in the 18th- and 19th-century sense, then that is true, but (a) his music contains structure, just not tonal structure; (b) the relativism of aesthetic judgments means that that is neither a bad thing nor a good thing except insofar as the pleasure some people take in his music is good; and (c) if you are hinting at the claim that people who say they like Alban Berg's music don't actually like it but are just signaling social prestige, then that may be true for some individuals but is false in the general sense.

(7) Liking has a great deal more to do with familiarity than you think it does, and substantial music cognition research backs this up.

(8) It is probably impossible to separate individual aesthetic pleasure from socially-pressured aesthetic pleasure as thoroughly as you want to. (I'm reminded of the famous Judgment of Paris wine-tasting episode.) We are social beings, so we should release ourselves from the imagined obligation to make all our aesthetic judgments in a social vacuum. Even the pleasure you take from the things you think you like in the most genuine and uncomplicated way is to some degree socially determined. Liking things is something that we're in many ways primed to do by what we hear from others -- if my best friend recommends me a novel, I'll read it with somewhat more patience knowing that someone whose opinion I value has vouched for it. If in the end I like it, even if I wouldn't have liked it otherwise, there's no reason to think of that liking as being less genuine or less valuable.

(7+8) If you listen to the Great Fugue a hundred more times, unless you find something viscerally unpleasant about it (which, make no mistake, some people really do, since it's pretty loud and screechy), you will probably like it, because familiarity and social conditioning tend to do that to us. If you like it, stop driving yourself crazy and just like it. If you can't stand to like something thinking that there's some element of social conditioning driving you to do so, then by all means stop listening to the Great Fugue.

(9) That said, many people do find that it's interesting or pleasant to expend a little effort to see if they can learn to like something that they don't immediately like but have some reason to think they may like eventually. That's what an acquired taste is. If you give it a shot and it doesn't take, then let yourself off the hook. And you can always take some pleasure in being the aggressive countersignaller who goes around telling anyone who'll listen that the Great Fugue is totally overrated (some people will take a lot more pleasure in that than they ever could in the piece itself (the politest, but by no means only, word for those people is "contrarians")).

Comment author: Yvain 29 July 2011 12:30:20AM 65 points [-]

Upvoted for several reasons:

  • excellent theory about cryonics, much more plausible than things like "people hate cryonics because they're biased against cold" that have previously appeared on here.

  • willingness to acknowledge serious issue. Work is terrible, and the lives of many working people, even people with "decent" jobs in developed countries, are barely tolerable. It is currently socially unacceptable to mention this. Anyone who breaks that silence has done a good deed.

  • spark discussion on whether this will continue into the future. I was reading a prediction from fifty years ago or so that by 2000, people would only work a few hours a day or a few days a week, because most work would be computerized/roboticized and technology would create amazing wealth. Most work has been computerized/roboticized, technology has created amazing wealth, but working conditions are little better, and maybe worse, than they were fifty years ago. A Hansonian-style far future could lead to more of the same, and Hanson even defends this to a degree. In my mind, this is something futurologists should worry about.

  • summary of the article was much better than the article itself, which was cluttered with lots of quotes and pictures and lengthiness. Summaries that are better than the original articles are hard to do, hence, upvote.

Comment author: SilasBarta 18 October 2010 06:22:12PM 67 points [-]

Eliezer Yudkowsky holds the honorary title of Duke Newcomb.

Comment author: lukeprog 14 May 2012 10:07:06AM 65 points [-]

I don't think this response supports your claim that these improvements "would not and could not have happened without more funding than the level of previous years."

I know your comment is very brief because you're busy at minicamp, but I'll reply to what you wrote, anyway: Someone of decent rationality doesn't just "try things until something works." Moreover, many of the things on the list of recent improvements don't require an Amy, a Luke, or a Louie.

I don't even have past management experience. As you may recall, I had significant ambiguity aversion about the prospect of being made Executive Director, but as it turned out, the solution to almost every problem X has been (1) read what the experts say about how to solve X, (2) consult with people who care about your mission and have solved X before, and (3) do what they say.

When I was made Executive Director and phoned our Advisors, most of them said "Oh, how nice to hear from you! Nobody from SingInst has ever asked me for advice before!"

That is the kind of thing that makes me want to say that SingInst has "tested every method except the method of trying."

Donor database, strategic plan, staff worklogs, bringing staff together, expenses tracking, funds monitoring, basic management, best-practices accounting/bookkeeping... these are all literally from the Nonprofits for Dummies book.

Maybe these things weren't done for 11 years because SI's decision-makers did make good plans but failed to execute them due to the usual defeaters. But that's not the history I've heard, except that some funds monitoring was insisted upon after the large theft, and a donor database was sorta-kinda-not-really attempted at one point. The history I've heard is that SI failed to make these kinds of plans in the first place, failed to ask advisors for advice, failed to read Nonprofits for Dummies, and so on.

Money wasn't the barrier to doing many of those things, it was a gap in general rationality.

I will agree, however, that what is needed now is more money. We are rapidly becoming a more robust and efficient and rational organization, stepping up our FAI team recruiting efforts, stepping up our transparency and accountability efforts, and stepping up our research efforts, and all those things cost money.

At the risk of being too harsh… When I began to intern with the Singularity Institute in April 2011, I felt uncomfortable suggesting that people donate to SingInst, because I could see it from the inside and it wasn't pretty. (And I'm not the only SIer who felt this way at the time.)

But now I do feel comfortable asking people to donate to SingInst. I'm excited about our trajectory and our team, and if we can raise enough support then we might just have a shot at winning after all.

Comment author: shminux 10 May 2012 06:30:00PM *  58 points [-]

Wow, I'm blown away by Holden Karnofsky, based on this post alone. His writing is eloquent, non-confrontational and rational. It shows that he spent a lot of time constructing mental models of his audience and anticipated its reaction. Additionally, his intelligence/ego ratio appears to be through the roof. He must have learned a lot since the infamous astroturfing incident. This is the (type of) person SI desperately needs to hire.

Emotions out of the way, it looks like the tool/agent distinction is the main theoretical issue. Fortunately, it is much easier than the general FAI one. Specifically, to test the SI assertion that, paraphrasing Arthur C. Clarke,

Any sufficiently advanced tool is indistinguishable from an agent.

one ought to formulate and prove this as a theorem, and present it for review and improvement to the domain experts (the domain being math and theoretical computer science). If such a proof is constructed, it can then be further examined and potentially tightened, giving new insights into the mission of averting the existential risk from intelligence explosion.

If such a proof cannot be found, this will lend further weight to HK's assertion that SI appears to be poorly qualified to address its core mission.

Comment author: DanielVarga 04 April 2011 09:06:57PM 63 points [-]

It is not really a quote, but a good quip from an otherwise lame recent internet discussion:

Matt: Ok, for all of the people responding above who admit to not having a soul, I think this means that it is morally ok for me to do anything I want to you, just as it is morally ok for me to turn off my computer at the end of the day. Some of us do have souls, though.

Igor: Matt - I agree that people who need a belief in souls to understand the difference between killing a person and turning off a computer should just continue to believe in souls.

Comment author: marchdown 27 November 2010 01:46:12AM 67 points [-]

This last one actually works!

Comment author: Vladimir_M 03 October 2010 10:45:08AM *  26 points [-]

Although lots of people here consider it a hallmark of "rationality," assigning numerical probabilities to common-sense conclusions and beliefs is meaningless, except perhaps as a vague figure of speech. (Absolutely certain.)

Comment author: roland 08 March 2009 11:53:18PM 67 points [-]

The reasoning mistake that Yvain and a lot of people here are making is: they think that if someone is scared in a supposedly haunted house there must be a belief in ghosts hidden somewhere inside his brain. What happens in reality is that the mind is hardwired to be scared when certain conditions are met. Being out in the dark is scary, not because you have a belief in ghosts but because there used to be predators roaming about in the ancestral environment, and so the brain triggers accordingly. Now this whole ghost issue is probably a post-factum rationalization. Our verbal reasoning just pops out with an explanation of why we are scared ("I was scared without a reason, so I must have a belief in ghosts somewhere in my mind!"). The real reason is below the surface and inaccessible because we lack the ability for introspection.

Comment author: ahbwramc 23 October 2014 02:46:23AM 65 points [-]

Survey complete! I'd have answered the digit ratio question, but I don't have a ruler, of all things, at home. Ooh, now to go check my answers for the calibration questions.

Comment author: westward 18 December 2013 09:05:29PM *  66 points [-]

"Finally, a study that backs up everything I've always said about confirmation bias." -Kslane, Twitter

