Comment author: dripgrind 26 June 2012 01:14:06AM -2 points [-]

[Executive summary: solve the underlying causes of your problem by becoming Pope]

I think it's a mistake to focus too much on the case of one particular convert to Catholicism simply because you know her personally. To do that is to fall prey to the availability heuristic.

The root cause of your problem with your friend is that the Catholic Church exists as a powerful and influential organisation which continues to promote its weird dogma, polluting Leah's mind along with millions of others. Before investing time and effort trying to flip her back to the side of reason, you should consider whether you could destroy the Church and dam the river of poison at its source. I will now outline a metho

Comment author: handoflixue 29 July 2011 10:45:40PM 3 points [-]

Don't you have eye tests and dental checkups on a precautionary basis?

I see a strong difference between "go for a 2 hour checkup" and "invest $28K in cryonics". I wasn't aware of the pre-emptive breast removals, though - that would definitely qualify as the sort of thing I was looking for, and I still wonder how common it is among people who would benefit.

the fact that you assume people have to invest their own money

I'm not aware of any country whose socialized healthcare pays for cryonics, so cryonics is certainly an out-of-pocket cost. If I'm wrong, please let me know so that I can move ASAP :)

That does make me wonder if cryonics is a harder sell in countries with socialized healthcare, just because people aren't used to having to pay for healthcare at all. The US, at least, is used to the idea of spending money on that scale.

Comment author: dripgrind 30 July 2011 01:21:27AM 3 points [-]

When I said "you assume people have to invest their own money to ensure their health" I was obviously referring to preventative medical interventions, which is what you were actually asking about, not cryonics.

The breast/ovarian cancer risk genes are BRCA 1/2 - I seem to remember reading that half of carriers opt for some kind of preventative surgery, although that was in a lifestyle magazine article called something like "I CUT OFF MY PERFECT BREASTS" so it may not be entirely reliable. I'm sure it's not just a tiny minority who opt for it, though. I'm sure there are better figures on Google Scholar.

If you consider the cost of taking statins from age 40 to 80, in total that's a pricey intervention.

Maybe the lack of people using expensive preventative measures is because few of them exist - or few of them have benefits which outweigh the side-effects/pain/costs - not that people don't want them in general. If there were a pill that cost $30,000 and made you immune to all cancer with no side effects, I'm sure everyone would want it.

I think the real issue is that people don't consider cryonics to be "healthcare". That seems reasonable, because it's a mixture of healthcare and time travel into an unknown future where you might be put in a zoo by robots for all anybody knows.

Comment author: Plasmon 29 July 2011 07:58:20PM 21 points [-]

Wasn't there something similar a while ago? ... yes there was. I can reasonably assume there will be others in the future. You are trying to get people to donate by appealing to an artificial sense of urgency ("Now is your chance to" , "Donate now" ). Beware that this triggers dark arts alarm bells.

Nevertheless, I have now donated an amount of money.

Comment author: dripgrind 29 July 2011 10:36:20PM 92 points [-]

Only on this site would you see perfectly ordinary charity fundraising techniques described as "dark arts", while in the next article over, the community laments the poor image of the concept of beheading corpses and then making them rise again.

Comment author: handoflixue 29 July 2011 08:21:38PM 9 points [-]

It seems to me that most life-saving medical procedures are done at the time of need. People tend not to get their appendix removed "as a precaution", and the most preventative care I can think of is an annual visit and vaccinations (and somehow we have managed to get a small segment of the population stupid enough to start protesting even that...)

I have no clue what the numbers are, but how many people actually have a will? A medical directive? Actively engage in preventative care before they have a problem? How many people go so far as to invest a large sum of money in advance, to ensure their health?

The most I've heard of is basic lifestyle changes: exercise more, eat healthy, regular checkups. In a different vein, setting up a will or an advance medical directive. That's it. I can't think of a single example of someone spending $10,000 today, in order to prevent something ten years down the road.

Comment author: dripgrind 29 July 2011 10:28:02PM 7 points [-]

Women with a high hereditary risk of breast cancer sometimes opt to have both their breasts removed pre-emptively. People take statins and blood pressure drugs for years to prevent heart attacks. Don't you have eye tests and dental checkups on a precautionary basis? There's plenty of preventative medical care.

Maybe the availability and marketing varies between countries - the fact that you assume people have to invest their own money to ensure their health suggests you're from the US or another country with a bad healthcare system. My country has a national health service which takes an interest in encouraging preventative medicines like statins, helping people give up smoking, and so on, since that saves it money overall. I'm sure the allocation of preventative care is far from ideal and shaped by political and social factors and drug company lobbying, but it does exist.

It would be a bad tradeoff to go through a painful appendectomy to prevent the small chance that you might get appendicitis (you can get your appendix removed when it's actually infected; the appendix may also have an evolutionary function as a reservoir of gut bacteria, and it can be used to reconstruct the bladder).

In response to My true rejection
Comment author: Yvain 15 July 2011 12:04:52AM 17 points [-]

If IBM makes a superintelligent AI that wants to maximize their share price, it will probably do something less like invent brilliant IBM products, and more like hack the stock exchange, tell its computers to generate IBM's price by calling on a number in the AI's own memory, and then convert the universe to computronium in order to be able to represent as high a number as possible.

To build a superintelligence that actually maximizes IBM's share price in a normal way that the CEO of IBM would approve of would require solving the friendly AI problem but then changing a couple of lines of code. Part of what SIAI should be (and as far as I know, is) doing is trying to convince people like selfish IBM researchers that making an UnFriendly superintelligence would be a really bad idea even by their own selfish standards.

Another part is coming up with some friendly AI design ideas so that, if IBM is unusually sane and politicians are unusually sane and everyone is sane and we can make it to 2100 without killing ourselves via UnFriendly AI, then maybe someone will have a Friendly AI in the pipeline so we don't have to gamble on making it to 2200.

Also, the first rule of SIAI's assassinate unfriendly AI researchers program is don't talk about the assassinate unfriendly AI researchers program.

In response to comment by Yvain on My true rejection
Comment author: dripgrind 15 July 2011 12:56:25AM 1 point [-]

To build a superintelligence that actually maximizes IBM's share price in a normal way that the CEO of IBM would approve of would require solving the friendly AI problem but then changing a couple of lines of code.

That assumes that being Friendly to all of humanity is just as easy as being Friendly to a small subset.

Surely it's much harder to make all of humanity happy than to make IBM's stockholders happy? I mean, a FAI that does the latter is far less constrained, but it's still not going to convert the universe into computronium.

In response to My true rejection
Comment author: Normal_Anomaly 15 July 2011 12:31:03AM 12 points [-]

I downvoted you for suggesting in public that the SIAI kill people. Even if that's a good idea, which it probably isn't, the negative PR and subsequent loss of funding from being seen talking about it is such a bad idea that you should definitely not be talking about it on a public website. If you really want the SIAI to kill people, PM or email either 1) the people who would actually be able to make that change to SIAI policy, or 2) people you think might be sympathetic to your position (to have more support when you suggest 1).

Comment author: dripgrind 15 July 2011 12:50:29AM 1 point [-]

I'm not seriously suggesting that. Also, I am just some internet random and not affiliated with the SIAI.

I think my key point is that the dynamics of society are going to militate against deploying Friendly AI, even if it is shown to be possible. If I do a next draft I will drop the silly assassination point in favour of tracking AGI projects and lobbying to get them defunded if they look dangerous.

Comment author: Normal_Anomaly 15 July 2011 12:25:59AM 1 point [-]

If you have a rigorous, detailed theory of Friendliness, you presumably also know that creating an Unfriendly AI is suicide and won't do it. If one competitor in the race doesn't have the Friendliness theory or the understanding of why it's important, that's a serious problem, but I don't see any programmer who understands Friendliness deliberately leaving it out.

Also, what little I know about browser design suggests that, say, supporting the blink tag is an extra chunk of code that gets added on later, possibly with a few deeper changes to existing code. Friendliness, on the other hand, is something built into every part of the system--you can't just leave it out and plan to patch it in later, even if you're clueless enough to think that's a good idea.

Comment author: dripgrind 15 July 2011 12:41:37AM 2 points [-]

OK, what about the case where there's a CEV theory which can extrapolate the volition of all humans, or a subset of them? It's not suicide for you to tell the AI "coherently extrapolate my volition/the shareholders' volition". But it might be hell for the people whose interests aren't taken into account.

In response to My true rejection
Comment author: Nick_Tarleton 14 July 2011 11:42:41PM 5 points [-]

If it's possible to make a Friendly superhuman AI that optimises CEV, then it's surely way easier to make an unFriendly superhuman AI that optimises a much simpler variable, like the share price of IBM.

Yes.

Long before a Friendly AI is developed, some research team is going to be in a position to deploy an unFriendly AI that tries to maximise the personal wealth of the researchers, or the share price of the corporation that employs them, or pursues some other goal that the rest of humanity might not like.

This sounds in the direction of modeling AGI researchers as selfish mutants. Other motivations (e.g. poor Friendliness theories) and accidents (by researchers who don't understand the danger, or underestimate what they've built) are also likely.

This matters, since if AGI researchers aren't selfish mutants, you can encourage them to see the need for safety, and this is one goal of SIAI's outreach.

you need to (a) lobby against AI research by any groups who aren't 100% committed to Friendly AI (pay off reactionary politicians so AI regulation becomes a campaign issue, etc.)

At the very least, this (or anything that causes lots of people with power/resources to take AI more seriously) has to be weighed against the risk of causing the creation of more serious AGI/"FAI" projects. (I expect communicating enough reasoning to politicians, the general public, etc. to make them able to distinguish between plausible and hopeless "FAI" projects to be basically impossible.)

Also, SIAI is small and has limited resources, and in particular, doesn't have the sort of political connections that would make this worth trying.

Comment author: dripgrind 15 July 2011 12:34:19AM 2 points [-]

This sounds in the direction of modeling AGI researchers as selfish mutants. Other motivations (e.g. poor Friendliness theories) and accidents (by researchers who don't understand the danger, or underestimate what they've built) are also likely.

This matters, since if AGI researchers aren't selfish mutants, you can encourage them to see the need for safety, and this is one goal of SIAI's outreach.

AGI researchers might not be selfish mutants, but they could still be embedded in corporate structures which make them act that way. If they are a small startup where researchers are in charge, outreach could be useful. What if they're in a big corporation, and they're under pressure to ignore outside influences? (What kind of organisation is most likely to come up with a super-AI, if that's how it happens?)

If FAI does become a serious concern, nothing would stop corporations from faking compliance while actually implementing flawed systems, just as many software companies put more effort into reassuring customers that their products are secure than into actually fixing security flaws.

Realistically, how often do researchers in a particular company come to realise what they're doing is dangerous and blow the whistle? The reason whistleblowers are lionised in popular culture is precisely because they're so rare. Told to do something evil or dangerous, most people will knuckle under, and rationalise what they're doing or deny responsibility.

I once worked for a company which made dangerously poor medical software - an epidemiological study showed that deploying their software raised child mortality - and the attitude of the coders was to scoff at the idea that what they were doing could be bad. They even joked about "killing babies".

Maybe it would be a good idea to monitor what companies are likely to come up with an AGI. If you need a supercomputer to run one, then presumably it's either going to be a big company or an academic project?

In response to My true rejection
Comment author: orthonormal 14 July 2011 11:06:49PM 5 points [-]

Your proposals are the kind of strawman utilitarianism that turns out to be both wrong and stupid, for several reasons.

Also, I don't think you understand what the SIAI argues about what an unFriendly intelligence would do if programmed to maximize, say, the personal wealth of its programmers. Short story, this would be suicide or worse in terms of what the programmers would actually want. The point at which smarter-than-human AI could be successfully abused by a selfish few is after the problem of Friendliness has been solved, rather than before.

Comment author: dripgrind 15 July 2011 12:06:58AM 0 points [-]

Ah, another point about maximising. What if the AI uses CEV of the programmers or the corporation? In other words, it's programmed to maximise their wealth in a way they would actually want? Solving that problem is a subset of Friendliness.

Comment author: Raemon 14 July 2011 11:16:37PM *  2 points [-]

It kind of seems like at the moment, you mainly want to find post-hoc reasons why the exercise was "useful".

I did use all of those reasons to justify why I thought I should do it beforehand. But I have noticed myself repeating those reasons to make myself feel more justified. (Also possible that my primary motivation in doing so in the first place was the social-skill development one)

In any case, I think your recommendations for how to proceed are good ones.

Comment author: dripgrind 14 July 2011 11:41:00PM 2 points [-]

Another idea - if you can't find someone skilled in market research to do this for you at a discount or free, read a textbook about how to assess potential new brands to help with designing the survey.
