Comment author: PhilosophyTutor 28 April 2014 12:13:45AM *  1 point [-]

It seems, based on your later comments, that the premise of marketing worlds existing relies on there being trade-offs between our specified wants and our unspecified wants, so that a world optimised for our specified wants is highly likely to be lacking in our unspecified ones ("A world with maximal bananas will likely have no apples at all").

I don't think this is necessarily the case. If I only specify that I want low rates of abortion, for example, then I think it highly likely that I'd get a world that also has low rates of STD transmission, unwanted pregnancy, poverty, sexism and religiosity, because those things all go together. I think you could specify any one of those variables and, almost all of the time, you would get all the rest as a package deal without specifying them.

Of course a malevolent AI could probably deliberately construct a siren world to maximise one of those values and tank the rest, but such worlds seem highly unlikely to arise organically. The rising tide of education, enlightenment, wealth and egalitarianism lifts most of the important boats all at once, or at least that is how it seems to me.

Comment author: fubarobfusco 02 June 2011 04:15:27AM 10 points [-]

"X is a cult" seems to me to be an unneeded node.

"X worsens its members' rationality about itself" and "X uses state violence to deter criticism of itself" are pretty bad by themselves.

Comment author: PhilosophyTutor 31 October 2012 07:19:46AM 1 point [-]

"Cult" might not be a very useful term given the existing LW knowledge base, but it's a very useful term. I personally recommend Steve Hassan's book "Combating Cult Mind Control" as an excellent introduction to how some of the nastiest memetic viruses propagate and what little we can do about them.

He lists a lengthy set of characteristics that cults tend to have in common, which go beyond the mind-controlling tactics of mainstream religions. My fuzzy recollection is that est/Landmark was considered a cult by the people who make it their business to keep track of currently active cults.

In a sense these organisations are the polar opposite of LW. LW attempts to maximise rationality, although not always successfully, and cults attempt to create maximum dependence and control.

Comment author: [deleted] 16 September 2012 05:26:08PM 2 points [-]

I understand what it means to believe that an outcome will occur with probability p. I don't know what it means to believe this very strongly.

In response to comment by [deleted] on Rationality Quotes September 2012
Comment author: PhilosophyTutor 26 September 2012 02:22:46PM 0 points [-]

A possible interpretation is that the "strength" of a belief reflects the importance one attaches to acting upon that belief. Two people might both believe with 99% confidence that a new nuclear power plant is a bad idea, yet one of the two might go to a protest about the power plant and the other might not, and you might try to express what is going on there by saying that one holds that belief strongly and the other weakly.

You could of course also try to express it in terms of the two people's confidence in related propositions like "protests are effective" or "I am the sort of person who goes to protests". In that case strength would be referring to the existence or nonexistence of related beliefs which together are likely to be action-driving.
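To make that concrete, here's a toy sketch in Python (every number is invented for the sake of the example): both people share the same 99% credence that the plant is a bad idea, but they differ on the related, action-driving beliefs, so only one of them ends up at the protest.

```python
# Purely illustrative sketch: both agents hold the same 99% credence that the
# plant is a bad idea, but differ in the related, action-driving beliefs and
# values. Every number below is a made-up assumption, not a claim.

def ev_of_protesting(p_plant_is_bad, p_protest_works, harm_averted, cost_of_attending):
    """Expected utility of attending the protest versus staying home."""
    return p_plant_is_bad * p_protest_works * harm_averted - cost_of_attending

strong_believer = ev_of_protesting(0.99, 0.10, 100.0, 5.0)  # ~ +4.9, so they go
weak_believer = ev_of_protesting(0.99, 0.01, 100.0, 5.0)    # ~ -4.0, so they stay home

print(strong_believer > 0, weak_believer > 0)  # True False
```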

Comment author: wedrifid 09 July 2012 06:50:45AM 0 points [-]

Said literature makes statements about what is game-theory-rational. Those statements are only epistemically, instrumentally or normatively bad if you take them to be statements about what is LW-rational or "rational" in the layperson's sense.

Disagree on instrumentally and normatively. Agree regarding epistemically---at least when the works are careful with what claims are made. Also disagree with "game-theory-rational", although I understand the principle you are trying to get at. Either a more limited claim needs to be made, or more precise terminology used.

Comment author: PhilosophyTutor 09 July 2012 08:15:26AM -1 points [-]

I would be interested in reading about the bases for your disagreement. Game theory is essentially the exploration of what happens if you postulate perfectly informed, personal-utility-maximising entities who do not care at all, either way, about other entities. There's no explicit or implicit claim that people ought to behave like those entities, and thus no normative content whatsoever. So I can't see how the game theory literature could be said to give normatively bad advice, unless the speaker misunderstood the definition of rationality being used and took it to be a definition on which rationality is normative.

I'm not sure what negative epistemic or instrumental outcomes you foresee either, but I'm open to the possibility that there are some.

Is there a term you prefer to "game-theory-rational" that captures the same meaning? As stated above, game theory is the exploration of what happens when entities that are "rational" by that specific definition interact with the world or each other, so it seems like the ideal term to me.

Comment author: wedrifid 03 July 2012 04:10:16PM *  7 points [-]

I want to point out that Eliezer's (and LW's general) use of the word 'rationality' is entirely different from the use of the word in the game theory literature

And the common usage of 'rational' on LessWrong should be different from what is used in a significant proportion of game theory literature. Said literature gives advice, reasoning and conclusions that are epistemically, instrumentally and normatively bad. According to the basic principles of the site it is in fact stupid and not-rational to defect against a clone of yourself in a true Prisoner's Dilemma. A kind of stupidity that is not too different from being 'rational' like Spock.

ETA: Reading Grognor's reply to the parent, it seems that much of the negative affect is due to inconsistent use of the word 'rational(ity)' on LW. Maybe it's time to try yet again to taboo LW's 'rationality' to avoid the namespace collision with academic literature.

No. The themes of epistemic and instrumental rationality are the foundational premise of the site. It is right there in the tagline on the top of the page. I oppose all attempts to replace instrumental rationality with something that involves doing stupid things.

I do endorse avoiding excessive use of the word.

Comment author: PhilosophyTutor 09 July 2012 06:41:28AM -1 points [-]

Said literature gives advice, reasoning and conclusions that are epistemically, instrumentally and normatively bad.

Said literature makes statements about what is game-theory-rational. Those statements are only epistemically, instrumentally or normatively bad if you take them to be statements about what is LW-rational or "rational" in the layperson's sense.

Ideally we'd use different terms for game-theory-rational and LW-rational, but in the meantime we just need to keep the distinction clear in our heads so that we don't accidentally equivocate between the two.

Comment author: lukeprog 20 June 2012 08:28:57PM *  8 points [-]

Your first two questions ask about evidence that I already said I'm not in a position to share yet. I know that's unsatisfying, but... are your priors on my claims being true really very low? Famous scientists, especially, are barraged with a few purported unifications of quantum theory and relativity every month, and "Did they bother to pass peer review?" is a pretty useful heuristic for them. When you visualize a busy academic receiving CFAI from one person, and The Singularity and Machine Ethics from somebody else, which one do you think they're more likely to read and take seriously, and why? (Feel free to take this as a rhetorical question.)

A lot of your points are criticisms of blog posts, like "a lot of them don't have citations", or "a lot of them are poorly organized". These are true in many cases. However, if SIAI is considering whether to publish some given idea in paper or blog post form, they could simply spend the (fairly small) effort to write a blog post which was well organized and had citations, thereby making these problems moot.

The effort required may be much larger than you think. Eliezer finds it very difficult to do that kind of work, for example. (Which is why his papers still read like long blog posts, and include very few citations. CEV even contains zero citations, despite re-treading ground that has been discussed by philosophers for centuries, as "The Singularity and Machine Ethics" shows.)

And if you've done all that work, then why not also tweak it for use in a scholarly AI risk wiki, and then combine it with a couple other wiki articles into a paper?

I've heard many stories from academics of authors spending huge amounts of time and effort trying to get stuff published. In the most recent case, which I discussed with a grad student just a few hours ago, it took hundreds of hours, over a full year. If it's usually easy to get around that sort of thing, by just publishing in a different journal, why don't more academics do so?

Because their career depends on satisfying their advisors, or on getting published in particular journals. SI researchers' careers don't depend on investing hundreds of hours making revisions. If publishing in a certain journal is going to require 30 hours of revisions that don't actually improve the paper in our eyes, then we aren't going to bother publishing in that journal.

Comment author: PhilosophyTutor 28 June 2012 12:26:14AM *  -1 points [-]

The effort required may be much larger than you think. Eliezer finds it very difficult to do that kind of work, for example. (Which is why his papers still read like long blog posts, and include very few citations. CEV even contains zero citations, despite re-treading ground that has been discussed by philosophers for centuries, as "The Singularity and Machine Ethics" shows.)

If this is the case, then a significant benefit of trying to get papers published is that it would be excellent discipline for Eliezer, and would make him an even better scholar.

A follow-on benefit is that it would establish by example that nobody is above showing their work, acknowledging their debts and staying current with the relevant literature. Conceivably Eliezer is such a talented guy that it is of no benefit to him to do these things, but if everyone who thought they were that talented were excused from showing their work and keeping current, progress would slow significantly.

It also avoids reinventing the wheel. No matter how smart Eliezer is, it's always conceivable that someone else thought of something first and expressed it in rigorous detail with proper citations. A proper literature review avoids this waste of valuable research time.

Comment author: gwern 14 May 2012 12:53:45AM *  1 point [-]

Fair enough. I don't think the biases are symmetrical though: these people have a real and life-threatening disease, so they approach any intervention hoping strongly that it will work; hence we should expect them to yield more false positives than false negatives compared to whatever an equal medical trial would yield. On the other hand, when we're looking at the chatrooms of hypochondriacs & aspartame sufferers, I think we can expect the bias to be reversed: if even crazy people find nothing to take offense to in something, that something may well be harmless.

This yields the useful advice that when looking at any results, we should look at whether the participants have an objectively (or at least, third-party) validated problem. If they do, we should pay attention to their nulls but less attention to their claims about what helps. And vice versa. (Can we then apply this to self-experimentation? I think so, but there we already have selection bias telling us to pay little attention to exciting news like 'morning faces help my bipolar', and more attention to boring nulls like 'this did nothing for me'.)

Kind of a moot point I guess, because the fakes do not seem to be well-organized at all.

Comment author: PhilosophyTutor 14 May 2012 01:35:58AM 0 points [-]

I think you're probably right in general, but I wouldn't discount the possibility that, for example, a rumour could get around the ALS community that lithium was bad, and be believed by enough people for the lack of blinding to have an effect. There was plenty of paranoia in the gay community about AZT, for example, despite the fact that they had a real and life-threatening disease, so it doesn't follow that people with real and life-threatening diseases are reliable personal judges of effective interventions.

Similarly, if the wi-fi "allergy" crowd claimed that anti-allergy meds from a big, evil pharmaceutical company did not help them, that could be a finding that would hold up under blinding, but then again it might not.

I do worry that some naive Bayesians take personal anecdotes to be evidence far too quickly, without properly thinking through the odds that they would hear such anecdotes in worlds where the anecdotes were false. People are such terrible judges of medical effectiveness that in many cases I don't think the probability gets far from 50% either way.
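As a toy illustration of that point (with invented numbers): if an anecdote that "treatment X helped me" is nearly as likely to reach you in a world where X does nothing, then hearing it should barely move your credence.

```python
# Toy Bayes calculation with invented numbers: an anecdote that is nearly as
# likely in the "treatment does nothing" world as in the "treatment works"
# world barely moves the posterior.

def posterior(prior, p_anecdote_if_works, p_anecdote_if_not):
    """P(treatment works | heard the anecdote), by Bayes' rule."""
    numerator = p_anecdote_if_works * prior
    return numerator / (numerator + p_anecdote_if_not * (1 - prior))

print(posterior(prior=0.5, p_anecdote_if_works=0.55, p_anecdote_if_not=0.50))  # ~0.52
```

With a likelihood ratio that close to one, a 50% prior only moves to about 52%.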

Comment author: Eliezer_Yudkowsky 19 February 2010 06:22:37PM 18 points [-]

You don't want to rely on studies in medical journals because their conclusion-drawing methodologies are haphazard.

I dispute none of this, but so far as I can tell or guess, the main thing powering the superior statistical strength of PatientsLikeMe is the fact that medical researchers have learned to game the system and use complicated ad-hoc frequentist statistics to get whatever answer they want or think they ought to get, and PatientsLikeMe has some standard statistical techniques that they use every time.

Also, I presume, PatientsLikeMe is Bayesian or Bayes-like in that they take all available evidence into account and update incrementally, while every medical experiment is a whole new tiny little frequentist universe.

This is not really an article about PatientsLikeMe being strong, it is an article about the standard statistical methods of academic science being weak and stupid.

Comment author: PhilosophyTutor 13 May 2012 09:02:39PM 0 points [-]

What is your evidence for the claim that the main thing powering the superior statistical strength of PatientsLikeMe is that medical researchers have learned to game the system and use complicated ad-hoc frequentist statistics to get whatever answer they want or think they ought to get? What observations have you made that are more likely given that hypothesis than under the alternative?

Comment author: gwern 13 May 2012 08:00:06PM 1 point [-]

Lack of double-blinding ought to increase the false positive rate, right? But the result presented in the OP (the lithium) was a finding of a negative.

Comment author: PhilosophyTutor 13 May 2012 08:45:15PM 0 points [-]

No. Lack of double-blinding will increase the false negative rate too, if the patients, doctors or examiners think that something shouldn't work or should be actively harmful. If you test a bunch of people who believe that aspartame gives them headaches or that wifi gives them nausea without blinding them, you'll get garbage out as surely as if you test homeopathic remedies unblinded on a bunch of people who think homeopathic remedies cure all ills.
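Here's a rough, purely illustrative simulation of that false-negative mechanism (all parameters invented): the treatment genuinely helps, but unblinded participants who expect it to be useless or harmful under-report their improvement, so the trial's average looks null or negative.

```python
# Rough simulation with invented parameters: the drug genuinely helps
# (true_benefit > 0), but unblinded participants who expect it to be useless
# or harmful under-report their improvement, so the unblinded trial reads
# as a null or negative result: a false negative.

import random

def mean_reported_improvement(n=500, true_benefit=1.0, expectation_bias=-1.5,
                              blinded=True, seed=0):
    random.seed(seed)
    reports = []
    for _ in range(n):
        improvement = true_benefit + random.gauss(0, 2)  # real effect plus noise
        if not blinded:
            improvement += expectation_bias  # "this stuff is poison" colours the self-report
        reports.append(improvement)
    return sum(reports) / n

print(mean_reported_improvement(blinded=True))   # near +1.0: effect visible
print(mean_reported_improvement(blinded=False))  # near -0.5: looks useless or harmful
```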

In this particular case I think it's likely the system worked because it's relatively hard to kid yourself about progressing ALS symptoms, and even with a hole in the blinding sometimes more data is just better. This is about as easy as medical problems get.

Generalising from this to the management of chronic problems seems like a major mistake. There's far, far more scope to fool oneself with placebo effects, wishful thinking, failure to compensate for regression to the mean, attachment to a hypothesis and other cognitive errors with a chronic problem.

Comment author: steven0461 11 May 2012 12:22:24AM *  5 points [-]

Rawls's Wager: the least well-off person lives in a different part of the multiverse than we do, so we should spend all our resources researching trans-multiverse travel in a hopeless attempt to rescue that person. Nobody else matters anyway.

Comment author: PhilosophyTutor 11 May 2012 06:25:56AM 0 points [-]

If this is a problem for Rawls, then Bentham has exactly the same problem, given that you can hypothesise the existence of a gizmo that creates 3^^^3 units of positive utility and is hidden in a different part of the multiverse. Or, for that matter, a gizmo which will inflict 3^^^3 dust specks on eyes across the multiverse if we don't find it and stop it. Tell me that you think that's an unlikely hypothesis and I'll just raise the relevant utility or disutility to the power of 3^^^3 again, as often as it takes to overcome the degree of improbability you place on the hypothesis.

However, I think it takes a mischievous reading of Rawls to make this a problem. Given that the risk of the trans-multiverse travel project being hopeless (as you stipulate) is substantial, and these hypothetical choosers are meant to be risk-averse rather than altruistic, I think you could consistently argue that the genuinely risk-averse choice is not to pursue the project, since they don't know that this worse-off person exists, nor that they could do anything about it if that person did exist.

That said, diachronic (cross-time) moral obligations are a very deep philosophical problem. Given that the number of potential future people is unboundedly large, and those people are at least potentially very badly off, if you try to take moral philosophies developed to handle present-day problems and apply them to far-future diachronic problems, it's very hard to avoid the conclusion that we should dedicate 100% of the world's surplus resources and all our free time to doing all sorts of strange and potentially contradictory things to benefit far-future people or protect them from possible harms.

This isn't a problem that Bentham's hedonistic utilitarianism, nor Eliezer's gloss on it, handles any more satisfactorily than any other theory as far as I can tell.
