I always just want to know: how do you propose to naturalize utilitarianism, thus showing your normative questions to actually be factual ones, thus showing that your normative claims are in fact grounded?
I would like to write an essay about that eventually, but I figured persuading PUs of the merits of HU was lower hanging fruit.
For what it's worth, I have a lot of sympathy with your scepticism - I would rather (and believe it possible to) build a system resembling ethics up without reference to normativity, 'oughts', or any of their associated baggage. I think the trick will be to properly understand the overlap of ethics and epistemology, both of which are subject to similar questions (how do we non question-beggingly 'ground' 'factual' questions?), but the former of whose questions people disproportionately emphasise.
[ETA] It's also hard to pin down what the null hypothesis would be. Calling it 'nihilism' of any kind is just defining the problem away. E.g., if you just decide you want to do something nice for your friend - in the sense of something beneficial for her, rather than just picking an act that will give you warm fuzzies - then your presumption of what category of things would be 'nice for her' implicitly judges how to group states of the world. If you also feel like some things you might do would be nicer for her than others, then you're judging how to order states of the world.
This already has the makings of a 'moral system', even though there's not a 'thou shalt' in sight. If you further think that how she'll react to whatever you do for her can corroborate/refute your judgement of what things are nice(r than others) for her, your system seems to have, if not a 'realist' element, at least a non purely antirealist/subjectivist one. It's not utilitarianism (yet), but it seems to be heading in that sort of direction.
It is, although I found this
"People who aren't familiar with Derren Brown or other expert human-persuaders sometimes think this must have been very difficult for Yudkowsky to do or that there must have been some sort of special trick involved,"
amusing, as Derren Brown is a magician. When Derren Brown accomplishes a feat of amazing human psychology, he is usually just cleverly disguising a magic trick.
How do we know EY isn't doing the same?
‘A charity that very efficiently promoted beauty and justice’ would still be a utilitarian charity (where the form of util defined utility as beauty and justice), so if that’s not EA, then EA does not = utilitarianism, QED.
Also, as Ben Todd and others have frequently pointed out, many non-utilitarian ethics subsume the value of happiness. A deontologist might want more happiness and less suffering, but feel that he also had a personal injunction against violating certain moral rules. So long as he didn't violate those rules, he might well want to promote welfare as efficiently as possible.
I'd guess these effects are largely not causation, but correlation caused by conscientiousness/ambition causing both double majors and higher earnings.
Unless you're certain of this or have some reason to suspect a factor pulling in the other direction, this still seems to suggest higher expectation from doing a double major.
You know, "politics is the mindkiller" is not only about the conventional meaning of the word "politics". It is about tribes and belonging. Right now you are conflicted as a member of two tribes, and you may feel pressured to choose your loyalty, and protect your status in the selected tribe. Which is not a good epistemic state.
Now on the topic:
Cryonics uses up far more resources [than cancer treatment]
Do we have any specific numbers here? I think the values for "cancer treatment" would depend on the exact kind of treatment and also how long the patient survives, but I don't have an estimate.
If cryonics works, [family and friends] still suffer the same [grief].
Wrong alief. Despite saying "if cryonics works", the author, in the rest of the sentence, still expects that it does not. Otherwise, they would also include the happiness of family and friends after the frozen person is revived and cured. That is what "if cryonics works" means.
Expressed this way, it is like saying (for a conventional treatment of a conventional disease) that whether doctors can or cannot cure the disease there is no difference, because either way family and friends suffer grief for having the person taken to the hospital. Yes, they do. But in one case, the person also returns from the hospital. That's the whole point of taking people to hospitals, isn't it?
trying to integrate [cryonics] better into society uses up time and resources that could have been spent on higher expectation activities
Technically, by following this argument, we should also stop curing cancer, because that money could also be used for GiveWell charities and animal welfare. Suddenly the argument does not sound so appealing. Why? I guess because cryonics is far; curing cancer (yours, or your family's) is near; and GiveWell charities are also far, but less so than cryonics. Removing a near suffering feels more important than removing a far suffering. That's human; but let's not pretend that we did a utilitarian calculation here when we actually used a completely different decision procedure.
...but you already said that.
I think that this discussion is mostly a waste of time, simply because your opponent's true rejection seems to be "cryonics does not work", and everything else is written under that alief. Under this alief the arguments make sense: if cryonics does not work, of course wasting money on cryonics is stupid. But instead of saying this openly, there is a rationalization about why utilitarians should do this and shouldn't do that, pretending that we have numbers that prove "utility(cancer cure) > utility(animal welfare) > utility(cryonics)". Also, when discussing cryonics you are supposed to be a perfect utilitarian, willing to sacrifice your life for someone else's greater benefit, but you are allowed to make a selfish exception from perfect utilitarianism when curing your own cancer.
For me, the only interesting argument was the one that a smart human in a pre-Singularity world is more useful than a smart human in a post-Singularity world, therefore curing smart people now is more useful than freezing them and curing them in future.
Written a full response to your comments on Felicifia (I'm not going to discuss this in three different venues), but...
your opponent's true rejection seems to be "cryonics does not work"
This sort of groundless speculation about my beliefs (and its subsequent upvoting success), a) in a thread where I’ve said nothing about them, b) where I’ve made no arguments to whose soundness the eventual success/failure of cryo would be at all relevant, and c) where the speculator has made remarks that demonstrate he hasn’t even read the arguments he’s dismissing (among other things a reductio ad absurdum to an ‘absurd’ conclusion which I’ve already shown I hold), does not make me more confident that the atmosphere on this site supports proper scepticism.
Ie you're projecting.
I'm with you, 90% seems too high given the evidence he cites or any evidence I know of.
Assuming you accept the reasoning, 90% seems quite generous to me. What percentage of complex computer programmes when run for the first time exhibit behaviour the programmers hadn't anticipated? I don't have much of an idea, but my guess would be close to 100. If so, the question is how likely unexpected behaviour is to be fatal. For any programme that will eventually gain access to the world at large and quickly become AI++, that seems (again, no data to back this up - just an intuitive guess) pretty likely, perhaps almost certain.
For any parameter of human comfort (e.g. 293 kelvin, 60% water, 40-hour working weeks), a decimal point misplaced by a single position seems like it would destroy the economy at best and life on earth at worst.
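Made explicit, the reasoning above is just a product of two probabilities. A minimal sketch, using placeholder numbers of my own choosing (not figures anyone in the thread has defended):

```python
# Toy expected-risk product; both probabilities are illustrative assumptions.
p_unexpected = 0.99             # assumed: chance a complex program surprises its authors on first run
p_fatal_given_unexpected = 0.9  # assumed: chance such a surprise is fatal for an unboxed AI++
p_catastrophe = p_unexpected * p_fatal_given_unexpected
print(round(p_catastrophe, 3))  # 0.891
```

Even granting generous uncertainty on the second factor, the product stays close to the first: if surprises are near-certain, the estimate is dominated by how often a surprise is fatal.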
If Holden’s criticism is appropriate, the best response might be to look for other options rather than making a doomed effort to make FAI – for example trying to prevent the development of AI anywhere on earth, at least until we can self-improve enough to keep up with it. That might have a low probability of success, but if FAI has sufficiently low probability, it would still seem like a better bet.
Seems like a decent reply overall, but I found the fourth point very unconvincing. Holden has pointed out what we already know: that the world's best experts would normally test a complicated programme by running it, isolating what (inevitably) went wrong by examining the results it produced, rewriting it, then doing it again.
Almost no programmes are glitch free, so this is at best an optimization process and one which - as Holden pointed out - you can't do with this type of AI. If (/when) it goes wrong the first time, you don't get a second chance. Eliezer's reply doesn't seem to address this stark difference between what experts have been achieving and what SIAI is asking them to achieve.
Hm. Interesting piece. I'm partially sold, but not on this: 'Further, I see little difference between how a Muslim "chooses" to get upset at disrespect to Mohammed, and how a Westerner might "choose" to get upset if you called eir mother a whore.'
I'm pretty content to call that a sort of choice, especially if you make it a fair comparison, ie a general remark not victimising one person that all mothers are whores. After all, there’s still a pretty big difference between that (or even the rather more inflammatory ‘all Western mothers are whores’), and (a sincerely offensive) ‘your mother is a whore’. One is basically bullying someone, assuming they’re not in a position to hurt you back equally; the other is the sort of casual prejudice that (cough) some of us discourage but don’t actually seek to ban.
On top of that, there’s a significant difference between drawing a picture of someone and drawing a picture of someone in a way calculated to piss people who like them off. In the Muhammad cartoons furore, it initially seemed to be Muslims who were trying to elide the difference – specifically by positioning the latter as very bad and the first as (almost) equally bad. If drawing the former is a political action against such a sentiment (or just an aesthetic statement, standing against those who’d repress a portrayal of something they thought was beautiful), then I hardly think it’s a reprehensible one. Here I think actual 'whores' - or rather porn stars - give a better analogy. Their portrayals offend a lot of people, but few sensible people think there’s a good argument for banning them a) because overturning our anti-censorship sentiments should require a pretty strong burden of evidence and b) because a lot of people very much like them, and why should they be deprived? After all, the naysayers choose not to look at something that exists, but the fans can’t do the reverse.
Lastly, (and leastly), there’s the question of accuracy of the original criticism. If your mother does sell herself for money, then, while victimising you for it is still pretty unpleasant, we would be more inclined to tolerate borderline cases of people pointing it out in a potentially offensive way than if it weren't true. But most of the times when someone’s mum is aggressively called a whore, she probably isn’t. On the other hand, by most accounts Muhammad was a brutal sex pest, who most likely would have ordered suicide bombings had the technology existed for him to do so.
I don’t know how relevant improv is to Less Wrongers, but I find it helpful for everyday social interactions, so:
Primary recommendation: Salinsky & Frances-White’s The Improv Handbook.
Reason: It's one of the only improv books that actually suggests physical strategies for you to try out that might improve your ability, rather than referring to concepts via pet phrases the author uses as a substitute for explaining what they mean. Not all of the suggestions worked for me, and they're based primarily on anecdotal evidence (plus the selection effect of the authors having run a reasonably successful improv group in the hostile London climate before writing the book), but I know of no other book with as constructive an approach. It also has a number of interview sections and the like, which are eminently skippable; only half the book is really worth reading for performance advice, but fortunately the table of contents makes it pretty clear which half that is.
I'm recommending it over Keith Johnstone's 'Impro' and 'Impro for Storytellers', whose ideas it incorporates, breaks down, and structures far better; over Chris Johnston's 'The Improvisation Game', which is an awful mishmash of interviews and turgid academic writing; over Charna Halpern's 'Truth in Comedy', which has quite a different set of ideas but spends more time boasting about how good they are than explaining them; over Jimmy Carrane and Liz Allen's 'Improvising Better', which has a few nice tips and is mercifully short, but doesn't have anything close to a coherent set of principles; over 'The Improvisation Book', which I haven't read in depth but seems to be little more than a list of games; and over Dan Patterson and Mark Leveson's 'Whose Line Is It Anyway', which unsurprisingly is very heavily focused on emulating the restrictive format of the show of the same name.
Secondary recommendation: Mick Napier’s Improvise, which comes from a different school of thought to TIH’s – the same one as ‘Truth in Comedy’.
Reason: It's the only one of any of those I've mentioned (TIH included) to explicitly suggest scientific reasoning in developing and assessing improv methods. After the author's initial proclamation to that effect, he doesn't really communicate how he's tried to do so, and his advice seems to assume you're already quite comfortable with being in an unspecified scene with no preset rules (one of the hardest situations for an improviser to find himself in, IME), so I wouldn't recommend it as a beginner's guide.
Good thing Bayesians don't need to identify the null hypothesis.
Upvoted for mentioning that ethics and epistemology are subject to similar questions. That's a huge insight, familiar in academic philosophy, but AFAICT rare among self-identified rationalists and little discussed on lesswrong.
Whatever you call it, they've got to identify some alternative, even if only tacitly by following some approximation of it in their daily life.