Comment author: dripgrind 26 June 2012 01:41:36AM -4 points [-]

[Executive summary: solve the underlying causes of your problem by becoming Pope]

I think it's a mistake to focus too much on the case of one particular convert to Catholicism simply because you know her personally. To do that is to fall prey to the availability heuristic.

The root cause of your problem with your friend is that the Catholic Church exists as a powerful and influential organisation which continues to promote its weird dogma, polluting Leah's mind along with the minds of millions of others. Before investing time and effort trying to flip her back to the side of reason, you should evaluate the costs and benefits of destroying the Church as an effective entity. I will now outline a method by which you and around 20 like-minded friends could do just that.

The Catholic Church is based in a tiny pseudo-state called Vatican City State. It has almost no permanent population and no true army, the Swiss Guard being more of a ceremonial bodyguard force (although they do have modern firearms as well as the cool-looking pikes).

What I propose is that you wait until the current Pope dies (not long now!) and a conclave has been assembled, then rush Vatican City in an infantry-style terrorist assault. There are 150 or so of the Swiss Guard but you could divide their forces by having some of you occupy a building, display simulated explosives and make fake demands. Your true targets are the cardinals who are there to elect a new Pope.

Once you capture the cardinals, simply force them at gunpoint to elect you Pope. In the event that you're not already a Bishop and therefore not an eligible candidate for the Papacy, simply mount a privilege escalation attack, whereby you force them to elect you to successively higher offices until you become a valid Pope. I anticipate that this process will be completed before the Italian state can mount an effective special forces operation to kill you.
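
In the spirit of the analogy, the ecclesiastical privilege-escalation attack above can be sketched in Python. This is a tongue-in-cheek toy, of course: the office list is a heavy simplification of the real hierarchy, and `force_election` is entirely hypothetical.

```python
# Climb the Church hierarchy one coerced election at a time, in the
# spirit of a software privilege-escalation attack: each step grants
# just enough authority to demand the next one.
OFFICES = ["layman", "deacon", "priest", "bishop", "pope"]

def force_election(current_office):
    """Hypothetical step: coerce the conclave into electing you to
    the next office up. Returns the new office."""
    return OFFICES[OFFICES.index(current_office) + 1]

def escalate_to_pope(start="layman"):
    """Repeat forced elections until you hold the Papacy; returns
    the sequence of offices gained along the way."""
    office = start
    steps = []
    while office != "pope":
        office = force_election(office)
        steps.append(office)
    return steps

# escalate_to_pope() walks layman -> deacon -> priest -> bishop -> pope
```

As with any privilege escalation, the key property is that no single step requires more authority than the previous step already granted.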

Now that you are Pope, you are the sovereign of Vatican City State. You can pardon your co-conspirators, and appoint them as ambassadors so they have diplomatic immunity outside the Vatican. You can then use your papal infallibility to remove all the problematic doctrines of the Church (homophobia, opposition to birth control/abortion, etc.) and bring all the child rapists it has shielded and enabled to justice. Or you could change all Catholic doctrines to those of Pastafarianism. Either way, the appeal of Catholicism to your friend would be destroyed as its so-called timeless moral insights are revealed as human constructs. One or (preferably) more "True" Catholic churches would arise to challenge your claim to the Papacy, causing decades of damaging, hilarious schisms, during which you should make sure to declare several Antipopes.

I suggest you treat this post as if it's a joke, and then seek military training as soon as possible.

Comment author: handoflixue 29 July 2011 10:45:40PM 3 points [-]

Don't you have eye tests and dental checkups on a precautionary basis?

I tend to view there as being a strong difference between "go for a 2 hour checkup" and "invest $28K in cryonics". I wasn't aware of the pre-emptive breast removals, though, that would definitely qualify as the sort of thing I was looking for - and I still wonder how common it is, amongst people who would benefit.

the fact that you assume people have to invest their own money

I'm not aware of any country whose socialized healthcare pays for cryonics, so cryonics is certainly an out-of-pocket cost. If I'm wrong, please let me know so that I can move ASAP :)

That does make me wonder if cryonics is a harder sell in countries with socialized healthcare, just because people aren't used to having to pay for healthcare at all. The US, at least, is used to the idea of spending money on that scale.

Comment author: dripgrind 30 July 2011 01:21:27AM 3 points [-]

When I said "you assume people have to invest their own money to ensure their health" I was obviously referring to preventative medical interventions, which is what you were actually asking about, not cryonics.

The breast/ovarian cancer risk genes are BRCA1 and BRCA2 - I seem to remember reading that half of carriers opt for some kind of preventative surgery, although that was in a lifestyle magazine article called something like "I CUT OFF MY PERFECT BREASTS", so it may not be entirely reliable. I'm sure it's not just a tiny minority who opt for it, though; there are probably better figures on Google Scholar.

If you consider the cost of taking statins from age 40 to 80, in total that's a pricey intervention.
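
As a rough back-of-the-envelope check (the monthly cost here is an assumed illustrative figure, not a real price):

```python
# Rough lifetime cost of a daily statin taken from age 40 to 80.
# $15/month is an assumed illustrative figure; real prices vary widely.
monthly_cost = 15          # dollars per month, assumed
years = 80 - 40            # duration of the intervention
lifetime_cost = monthly_cost * 12 * years
print(lifetime_cost)       # 7200
```

Even at a modest assumed price, the lifetime total lands in the thousands of dollars - the same order of magnitude people balk at for cryonics.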

Maybe the lack of people using expensive preventative measures is because few of them exist - or few of them have benefits which outweigh the side-effects/pain/costs - not that people don't want them in general. If there was a pill that cost $30,000 and made you immune to all cancer with no side effects, I'm sure everyone would want it.

I think the real issue is that people don't consider cryonics to be "healthcare". That seems reasonable, because it's a mixture of healthcare and time travel into an unknown future where you might be put in a zoo by robots for all anybody knows.

Comment author: Plasmon 29 July 2011 07:58:20PM 21 points [-]

Wasn't there something similar a while ago? ... yes there was. I can reasonably assume there will be others in the future. You are trying to get people to donate by appealing to an artificial sense of urgency ("Now is your chance to" , "Donate now" ). Beware that this triggers dark arts alarm bells.

Nevertheless, I have now donated an amount of money.

Comment author: dripgrind 29 July 2011 10:36:20PM 92 points [-]

Only on this site would you see perfectly ordinary charity fundraising techniques described as "dark arts", while in the next article over, the community laments the poor image of the concept of beheading corpses and then making them rise again.

Comment author: handoflixue 29 July 2011 08:21:38PM 9 points [-]

It seems to me that most life-saving medical procedures are done at the time of need. People tend not to get their appendix removed "as a precaution", and the most preventative care I can think of is an annual visit and vaccinations (and somehow we have managed to get a small segment of the population stupid enough to start protesting even that...)

I have no clue what the numbers are, but how many people actually have a will? A medical directive? Actively engage in preventative care before they have a problem? How many people go so far as to invest a large sum of money in advance, to ensure their health?

The most I've heard of is basic lifestyle changes: exercise more, eat healthy, regular checkups. In a different vein, setting up a will or an advance medical directive. That's it. I can't think of a single example of someone spending $10,000 today in order to prevent something ten years down the road.

Comment author: dripgrind 29 July 2011 10:28:02PM 7 points [-]

Women with a high hereditary risk of breast cancer sometimes opt to have both their breasts removed pre-emptively. People take statins and blood pressure drugs for years to prevent heart attacks. Don't you have eye tests and dental checkups on a precautionary basis? There's plenty of preventative medical care.

Maybe the availability and marketing varies between countries - the fact that you assume people have to invest their own money to ensure their health suggests you're from the US or another country with a bad healthcare system. My country has a national health service which takes an interest in encouraging preventative medicines like statins, helping people give up smoking, and so on, since that saves it money overall. I'm sure the allocation of preventative care is far from ideal and shaped by political and social factors and drug company lobbying, but it does exist.

It would be a bad tradeoff to go through a painful appendectomy to prevent the small chance that you might get appendicitis (you can get your appendix removed when it's actually infected; the appendix may have an evolutionary function as a reservoir of gut bacteria; and it can even be used to reconstruct the bladder).

Comment author: Nick_Tarleton 15 July 2011 12:12:07AM 6 points [-]

Just that, according to their belief system

It seems to you that according to their belief system.

they should sponsor false-flag cells who would (perhaps without knowing the master they truly serve).

Given how obvious the motivation is, and the high frequency with which people independently conclude that SIAI should kill AI researchers, think about what the consequences of anyone actually doing this would be for anyone actively worried about UFAI.

If you really believed that unFriendly AI was going to dissolve the whole of humanity into smileys/jelly/paperclips, then whacking a few reckless computer geeks would be a small price to pay, ethical injunctions or no ethical injunctions.

Ethical injunctions are not separate values to be traded off against saving the world; they're policies you follow because it appears, all things considered, that following them has highest expected utility, even if in a single case you fallibly perceive that violating them would be good.

(If you didn't read the posts linked from that wiki page, you should.)

Comment author: dripgrind 15 July 2011 01:02:40AM -3 points [-]

You're right that the motivation would be obvious today (to a certain tiny subset of geeky people). But what if there had been a decade of rising anti-AI feeling amongst the general population before the assassinations? Marches, direct actions, carried out with animal-rights style fervour? I'm sure that could all be stirred up with the right fanfiction ("Harry Potter And The Monster In The Chinese Room").

I understand what ethical injunctions are - but would SIAI be bound by them given their apparent "torture someone to avoid trillions of people having to blink" hyper-utilitarianism?

In response to My true rejection
Comment author: Yvain 15 July 2011 12:04:52AM 17 points [-]

If IBM makes a superintelligent AI that wants to maximize their share price, it will probably do something less like invent brilliant IBM products, and more like hack the stock exchange, tell its computers to generate IBM's price by calling on a number in the AI's own memory, and then convert the universe to computronium in order to be able to represent as high a number as possible.

To build a superintelligence that actually maximizes IBM's share price in a normal way that the CEO of IBM would approve of would require solving the friendly AI problem but then changing a couple of lines of code. Part of what SIAI should be (and as far as I know, is) doing is trying to convince people like selfish IBM researchers that making an UnFriendly superintelligence would be a really bad idea even by their own selfish standards.

Another part is coming up with some friendly AI design ideas so that, if IBM is unusually sane and politicians are unusually sane and everyone is sane and we can make it to 2100 without killing ourselves via UnFriendly AI, then maybe someone will have a Friendly AI in the pipeline so we don't have to gamble on making it to 2200.

Also, the first rule of SIAI's assassinate unfriendly AI researchers program is don't talk about the assassinate unfriendly AI researchers program.

In response to comment by Yvain on My true rejection
Comment author: dripgrind 15 July 2011 12:56:25AM 1 point [-]

To build a superintelligence that actually maximizes IBM's share price in a normal way that the CEO of IBM would approve of would require solving the friendly AI problem but then changing a couple of lines of code.

That assumes that being Friendly to all of humanity is just as easy as being Friendly to a small subset.

Surely it's much harder to make all of humanity happy than to make IBM's stockholders happy? I mean, a FAI that does the latter is far less constrained, but it's still not going to convert the universe into computronium.

In response to My true rejection
Comment author: Normal_Anomaly 15 July 2011 12:31:03AM 12 points [-]

I downvoted you for suggesting in public that the SIAI kill people. Even if that's a good idea, which it probably isn't, the negative PR and subsequent loss of funding from being seen talking about it is such a bad idea that you should definitely not be talking about it on a public website. If you really want the SIAI to kill people, PM or email either 1) the people who would actually be able to make that change to SIAI policy, or 2) people you think might be sympathetic to your position (to have more support when you suggest 1).

Comment author: dripgrind 15 July 2011 12:50:29AM 1 point [-]

I'm not seriously suggesting that. Also, I am just some internet random and not affiliated with the SIAI.

I think my key point is that the dynamics of society are going to militate against deploying Friendly AI, even if it is shown to be possible. If I do a next draft I will drop the silly assassination point in favour of tracking AGI projects and lobbying to get them defunded if they look dangerous.

Comment author: Normal_Anomaly 15 July 2011 12:25:59AM 1 point [-]

If you have a rigorous, detailed theory of Friendliness, you presumably also know that creating an Unfriendly AI is suicide and won't do it. If one competitor in the race doesn't have the Friendliness theory or the understanding of why it's important, that's a serious problem, but I don't see any programmer who understands Friendliness deliberately leaving it out.

Also, what little I know about browser design suggests that, say, supporting the blink tag is an extra chunk of code that gets added on later, possibly with a few deeper changes to existing code. Friendliness, on the other hand, is something built into every part of the system--you can't just leave it out and plan to patch it in later, even if you're clueless enough to think that's a good idea.

Comment author: dripgrind 15 July 2011 12:41:37AM 2 points [-]

OK, what about the case where there's a CEV theory which can extrapolate the volition of all humans, or a subset of them? It's not suicide for you to tell the AI "coherently extrapolate my volition/the shareholders' volition". But it might be hell for the people whose interests aren't taken into account.

In response to My true rejection
Comment author: Nick_Tarleton 14 July 2011 11:42:41PM 5 points [-]

If it's possible to make a Friendly superhuman AI that optimises CEV, then it's surely way easier to make an unFriendly superhuman AI that optimises a much simpler variable, like the share price of IBM.

Yes.

Long before a Friendly AI is developed, some research team is going to be in a position to deploy an unFriendly AI that tries to maximise the personal wealth of the researchers, or the share price of the corporation that employs them, or pursues some other goal that the rest of humanity might not like.

This sounds in the direction of modeling AGI researchers as selfish mutants. Other motivations (e.g. poor Friendliness theories) and accidents (by researchers who don't understand the danger, or underestimate what they've built) are also likely.

This matters, since if AGI researchers aren't selfish mutants, you can encourage them to see the need for safety, and this is one goal of SIAI's outreach.

you need to (a) lobby against AI research by any groups who aren't 100% committed to Friendly AI (pay off reactionary politicians so AI regulation becomes a campaign issue, etc.)

At the very least, this (or anything that causes lots of people with power/resources to take AI more seriously) has to be weighed against the risk of causing the creation of more serious AGI/"FAI" projects. (I expect communicating enough reasoning to politicians, the general public, etc. to make them able to distinguish between plausible and hopeless "FAI" projects to be basically impossible.)

Also, SIAI is small and has limited resources, and in particular, doesn't have the sort of political connections that would make this worth trying.

Comment author: dripgrind 15 July 2011 12:34:19AM 2 points [-]

This sounds in the direction of modeling AGI researchers as selfish mutants. Other motivations (e.g. poor Friendliness theories) and accidents (by researchers who don't understand the danger, or underestimate what they've built) are also likely.

This matters, since if AGI researchers aren't selfish mutants, you can encourage them to see the need for safety, and this is one goal of SIAI's outreach.

AGI researchers might not be selfish mutants, but they could still be embedded in corporate structures which make them act that way. If they are a small startup where researchers are in charge, outreach could be useful. What if they're in a big corporation, and they're under pressure to ignore outside influences? (What kind of organisation is most likely to come up with a super-AI, if that's how it happens?)

If FAI does become a serious concern, nothing would stop corporations from faking compliance but actually implementing flawed systems, just as many software companies put more effort into reassuring customers that their products are secure than actually fixing security flaws.

Realistically, how often do researchers in a particular company come to realise what they're doing is dangerous and blow the whistle? The reason whistleblowers are lionised in popular culture is precisely because they're so rare. Told to do something evil or dangerous, most people will knuckle under, and rationalise what they're doing or deny responsibility.

I once worked for a company which made dangerously poor medical software - an epidemiological study showed that deploying their software raised child mortality - and the attitude of the coders was to scoff at the idea that what they were doing could be bad. They even joked about "killing babies".

Maybe it would be a good idea to monitor what companies are likely to come up with an AGI. If you need a supercomputer to run one, then presumably it's either going to be a big company or an academic project?
