[Executive summary: solve the underlying causes of your problem by becoming Pope]
I think it's a mistake to focus too much on the case of one particular convert to Catholicism simply because you know her personally. To do that is to fall prey to the availability heuristic.
The root cause of your problem with your friend is that the Catholic Church exists as a powerful and influential organisation which continues to promote its weird dogma, polluting Leah's mind along with millions of others. Before investing time and effort trying to flip her back to the side of reason, you should consider whether you could destroy the Church and dam the river of poison at its source. I will now outline a metho
When I said "you assume people have to invest their own money to ensure their health" I was obviously referring to preventative medical interventions, which is what you were actually asking about, not cryonics.
The breast/ovarian cancer risk genes are BRCA1/2 - I seem to remember reading that half of carriers opt for some kind of preventative surgery, although that was in a lifestyle magazine article called something like "I CUT OFF MY PERFECT BREASTS" so it may not be entirely reliable. I'm sure it's not just a tiny minority who opt fo...
Only on this site would you see perfectly ordinary charity fundraising techniques described as "dark arts", while in the next article over, the community laments the poor image of the concept of beheading corpses and then making them rise again.
To be fair, it's just the heads that rise again, not the rest of the corpse... ah, I'm not helping, am I? :-)
Women with a high hereditary risk of breast cancer sometimes opt to have both their breasts removed pre-emptively. People take statins and blood pressure drugs for years to prevent heart attacks. Don't you have eye tests and dental checkups on a precautionary basis? There's plenty of preventative medical care.
Maybe the availability and marketing varies between countries - the fact that you assume people have to invest their own money to ensure their health suggests you're from the US or another country with a bad healthcare system. My country has a nationa...
You're right that the motivation would be obvious today (to a certain tiny subset of geeky people). But what if there had been a decade of rising anti-AI feeling amongst the general population before the assassinations? Marches, direct actions, carried out with animal-rights style fervour? I'm sure that could all be stirred up with the right fanfiction ("Harry Potter And The Monster In The Chinese Room").
I understand what ethical injunctions are - but would SIAI be bound by them given their apparent "torture someone to avoid trillions of people having to blink" hyper-utilitarianism?
If you think ethical injunctions conflict with hyper-utilitarianism, you don't understand what they are. Did you read the posts?
To build a superintelligence that actually maximizes IBM's share price in a normal way that the CEO of IBM would approve of would require solving the friendly AI problem but then changing a couple of lines of code.
That assumes that being Friendly to all of humanity is just as easy as being Friendly to a small subset.
Surely it's much harder to make all of humanity happy than to make IBM's stockholders happy? I mean, a FAI that does the latter is far less constrained, but it's still not going to convert the universe into computronium.
I'm not seriously suggesting that. Also, I am just some internet random and not affiliated with the SIAI.
I think my key point is that the dynamics of society are going to militate against deploying Friendly AI, even if it is shown to be possible. If I do a next draft I will drop the silly assassination point in favour of tracking AGI projects and lobbying to get them defunded if they look dangerous.
I'm not seriously suggesting that.
I would not make non-serious suggestions in a post titled "My True Rejection".
OK, what about the case where there's a CEV theory which can extrapolate the volition of all humans, or a subset of them? It's not suicide for you to tell the AI "coherently extrapolate my volition/the shareholders' volition". But it might be hell for the people whose interests aren't taken into account.
This sounds in the direction of modeling AGI researchers as selfish mutants. Other motivations (e.g. poor Friendliness theories) and accidents (by researchers who don't understand the danger, or underestimate what they've built) are also likely.
This matters, since if AGI researchers aren't selfish mutants, you can encourage them to see the need for safety, and this is one goal of SIAI's outreach.
AGI researchers might not be selfish mutants, but they could still be embedded in corporate structures which make them act that way. If they are a s...
Ah, another point about maximising. What if the AI uses CEV of the programmers or the corporation? In other words, it's programmed to maximise their wealth in a way they would actually want? Solving that problem is a subset of Friendliness.
Another idea - if you can't find someone skilled in market research to do this for you at a discount or free, read a textbook about how to assess potential new brands to help with designing the survey.
My point, then, is that as well as heroically trying to come up with a theory of Friendly AI, it might be a good idea to heroically stop the deployment of unFriendly AI.
Oh, I'm not saying that SIAI should do it openly. Just that, according to their belief system, they should sponsor false-flag cells to do it (perhaps without those cells knowing the master they truly serve). The absence of such false-flag cells indicates that SIAI aren't doing it - although their presence wouldn't prove they were. That's the whole idea of "false-flag".
If you really believed that unFriendly AI was going to dissolve the whole of humanity into smileys/jelly/paperclips, then whacking a few reckless computer geeks would be a small price to pay, ethical injunctions or no ethical injunctions. You know, "shut up and multiply", trillion specks, and all that.
I freely admit there are ethical issues with a secret assassination programme. But what's wrong with lobbying politicians to retard the progress of unFriendly AI projects, regulate AI, etc? You could easily persuade conservatives to pretend to be scared about human-level AI on theological/moral/job-preservation grounds. Why not start shaping the debate and pushing the Overton window now?
I do understand what SIAI argues an unFriendly intelligence would do if programmed to maximize some financial metric. I just don't believe that a corporation in a pos...
Can you give me some references for the idea that "you don't need to have solved the AGI problem to have solved friendliness"? I'm not saying it's not true, I just want to improve this article.
Let's taboo "solved" for a minute.
Say you have a detailed, rigorous theory of Friendliness, but you don't have it implemented in code as part of an AGI. You are racing with your competitor to code a self-improving super-AGI. Isn't it still quicker to implement something that doesn't incorporate Friendliness?
To me, it seems like, even if the theo...
I really don't know what you mean.
Action can be way worse than inaction, if what you end up doing is misleading yourself or doing harm to your cause.
I don't think what you've done is necessarily misleading or harmful, as long as you don't consider it anything more than incomplete, qualitative research into the range of responses the word "rationality" gets from random people.
But you really, really need to decide what the point of this exercise is. Are you trying to gather useful data, or make people feel more positive about rationality, or just get comfortable talking to random p...
Putting up a poll on Livejournal would also constitute "asking real people". Obviously an LJ poll isn't going to deliver a representative sample or actionable information - but then again, neither is asking 9 people who work in your building in New York.
It's definitely a good idea to do this.
But the way you've set about doing it isn't going to produce any worthwhile data.
I'm no expert on branding and market research, but I'm pretty sure that the best practice in the field isn't having conversations with 9 non-random strangers in a lift (asking different leading questions each time) then bunging it in Google Docs and getting other people to add more haphazard data in the hope that someone will make a website that sorts it all out.
First you need to define the question you're asking. Exactly which sub-popu...
Survivors and cult historians alike agree that this post, combined with the founding of the "rationalist boot camps", set in motion the sequence of events which culminated in the tragic mass cryocide of 2024.
At every step, Yudkowsky's words seemed rational to his enthralled followers - and also to all outside observers. And yet, when it became clear that commercial pressures were causing strong AI to be deployed long before Coherent Awesomeness Extrap-volition Theory could be made mathematically rigorous, the cult turned against itself.
One by on...
Erm, maybe my standards are too high, but this didn't seem overwhelmingly well-written as fiction and I really worry when material that attacks a target that's supposed to be attacked gets a free pass as art. Or maybe you all actually enjoyed that, and I'm being unreasonable in expecting blog comments to meet publishable quality standards.
This got a few chuckles from me, but I have found that fiction in which present-day issues escalate implausibly into warfare is a strong indicator and promoter of affective death spirals. You do realize that this story features prominent falsehoods that people actually believe, and is completely absurd in ways not inherited from the things it's satirizing, right?
So let's get this straight: the Iraqis blew up TWA 800, choosing a date that was symbolic to them, and the US covered it up.
Why the cover up? Going back to your four "reasons for obfuscation":
Because the US was unable to retaliate? - oh no, it was already bombing Iraq and enforcing a no-fly zone at that time. The US just wanted to ignore a terrorist attack by its enemy? Or maybe the Clinton administration wanted to maintain the flexibility to wait for the Iraqis to pull off a much worse terrorist attack, then wait to be voted out of office, ...
My point wasn't that the reasons aren't "conventional" - it's the fact that he's making a list of things that hadn't happened yet as possible ways to start a war which shows that he was already committed to the invasion no matter what happened.
In fact, none of those things really came to pass (although the Bush administration tried to create the impression that there was a link to 9-11 or anthrax) and yet the invasion still went ahead.
Your conspiracy theory doesn't make a lot of sense. If the US government wanted to hide Iraq's supposed involveme...
It's a good thing that, despite your obvious desire to obtain WMD capability, you're just an AI with no way to control a nuclear weapons factory.
Unless... Clippy, is that Stuxnet worm part of you? 'Fess up.
Just because some institutions over-reacted or implemented ineffective measures doesn't mean that the concern wasn't proportionate, or that effective measures weren't also being implemented.
In the UK, the government response was to tell infected people to stay at home and away from their GPs, and provide a phone system for people to get Tamiflu. They also ran advertising telling people to cover their mouths when they sneezed ("Catch it, bin it, kill it").
If anything, the government reaction was insufficient, because the phone system was delayed a...
Well, you also need to factor in the severity of the threat, as well as the risk of it happening.
Since the era of cheap international travel, there have been about 20 new flu subtypes, and one of those killed 50 million people (the Spanish flu, one of the greatest natural disasters ever), with a couple of others killing a few million. Plus, having almost everyone infected with a severe illness tends to disrupt society.
So to me that looks like there is a substantial risk (bigger than 1%) of something quite bad happening when a new subtype appears.
Given how ...
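The base-rate argument above can be put as a crude back-of-envelope calculation. A minimal sketch, where the counts are the rough figures from the comment (about 20 new subtypes, one catastrophic, a couple severe), not exact epidemiological data:

```python
# Crude base rate for "something quite bad" when a new flu subtype appears.
# Counts are the comment's rough figures, not exact data.
new_subtypes = 20   # approximate number of new flu subtypes since cheap travel
catastrophic = 1    # Spanish flu (~50 million deaths)
severe = 2          # "a couple of others killing a few million"

p_quite_bad = (catastrophic + severe) / new_subtypes
print(p_quite_bad)  # 0.15 - well above the 1% threshold mentioned above
```

Even if the counts are off by a factor of a few, the estimate stays comfortably above 1%.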
When you say that no one seems to be doing much, are you sure that's not just because the efforts don't get much publicity?
There is a lot that's being done:
Most nuclear-armed governments have massively reduced their nuclear weapon stockpiles, and try to stop other countries getting nuclear weapons. There's an international effort to track fissile material.
After the Cold War ended, the west set up programmes to employ Soviet nuclear scientists which have run until today (Russia is about to end them).
South Africa had nuclear weapons, then gave them up.
Israe...
Just recently, a piece of evidence has come to light which makes it very hard to believe that the motivation for the war was an honest fear of WMDs.
Rumsfeld wrote talking points for a November 2001 meeting with Tommy Franks containing the section:
"How start?
* Start now thinking about inspection demands."
http://www.gwu.edu/~nsarchiv/NSAEBB/NSAEBB326/index.htm
In the context of a meeting about planning an ...
I don't think worrying about nuclear war during the Cold War constituted either "crying wolf" or worrying prematurely. The Cuban Missile Crisis, the Able Archer 83 exercise (a year after "The Fate of the Earth" was published), and various false alert incidents could have resulted in nuclear war, and I'm not sure why anyone who opposed nuclear weapons at the time would be "embarrassed" in the light of what we now know.
I don't think an existential risk has to be a certainty for it to be worth taking seriously.
In the US, concer...
I don't know about SARS, but in the case of H1N1 it wasn't "crying wolf" so much as being prepared for a potential pandemic which didn't happen. I mean, very severe global flu pandemics have happened before. Just because H1N1 didn't become as virulent as expected doesn't mean that preparing for that eventuality was a waste of time.
I don't think you're taking this discussion seriously, and that hurts my feelings. I'm not going to vote your comment down, but I am going to unbend a couple of boxes of paperclips at the office tomorrow.
Before I reply, let's just look at the phrase "WMDs has nothing to do with mass destruction" and think for a while. Maybe we should taboo the phrase "WMD".
Was it supposed to be bad for Saddam to have certain objects merely because they were regulated under the Chemical Weapons Convention, or because of their actual potential for harm?
The justification for the war was that Iraq could give dangerous things to terrorists. Or possibly fire them into Israel. It was the actual potential for harm that was the problem.
Rusty shells with traces ...
The existence of articles on Google which contain the keywords "Saddam syria wmd" isn't enough to establish that Saddam gave all his WMD to Syria.
The articles you Googled are from WorldNetDaily (a news source with a "US conservative perspective"), a New York tabloid, a news aggregator, and a right wing blog. Of course, it would be wrong to dismiss them based on my assumptions about the possible bias of the sources, but on reading them they don't provide much evidence for what you are asserting.
The first three state that various people (...
An unFriendly AI doesn't necessarily care about human values - but I can't see why, if it were based on human neural architecture, it wouldn't exhibit good old-fashioned human values like empathy - or sadism.
I'm not saying that AI would have to be based on human uploads, but it seems like a credible path to superhuman AI.
Why do you think that an evil AI would be harder to achieve than a Friendly one?
Here's another possible objection to cryonics:
If an Unfriendly AI Singularity happens while you are vitrified, it's not just that you will fail to be revived - perhaps the AI will scan and upload you and abuse you in some way.
"There is life eternal within the eater of souls. Nobody is ever forgotten or allowed to rest in peace. They populate the simulation spaces of its mind, exploring all the possible alternative endings to their life." OK, that's generalising from fictional evidence, but consider the following scenario:
Suppose the Singularity d...
It's not true to say that those shifts took place without any "shift in underlying genetic makeup of population" - there has been significant human evolution over the last 6,000 years during the "shift from agricultural to urban lifestyle".
Of course, this isn't an argument for innatism, since evolution didn't cause the changes in lifestyle, but the common meme that human population genetics are exactly the same today as they were on the savannah isn't true.
I am "happy to take it as fact" until I find something contradictory. When that happens, I generally make note of both sources and look for more authoritative information. If you have a better methodology, I am open to suggestions.
So your standard of accepting something as evidence is "a 'mainstream source' asserted it and I haven't seen someone contradict it". That seems like you are setting the bar quite low. Especially because we have seen that your claim about the hijackers not being on the passenger manifest was quickly debunked...
Oh, and to try and make this vaguely on topic: say I was trying to do a Bayesian analysis of how likely woozle is to be right. Should I update on the fact that s/he is citing easily debunked facts like "the hijackers weren't on the passenger manifest", as well as on the evidence presented?
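To make that Bayesian question concrete: here is a toy update on "citing an easily-debunked fact", where the prior and both likelihoods are purely illustrative assumptions, not measurements of anyone's actual reliability:

```python
# Toy Bayesian update: how much should one easily-debunked citation
# shift our credence that the commenter's overall thesis is right?
# All three numbers below are illustrative assumptions.
prior = 0.5                 # P(thesis right) before seeing the citation

p_cite_given_right = 0.05   # careful, correct arguers rarely cite debunked facts
p_cite_given_wrong = 0.40   # sloppy or motivated arguers do so more often

# Bayes' rule: P(right | cited) = P(cited | right) * P(right) / P(cited)
evidence = p_cite_given_right * prior + p_cite_given_wrong * (1 - prior)
posterior = p_cite_given_right * prior / evidence
print(round(posterior, 3))  # 0.111 - the debunked citation cuts credence sharply
```

So yes, on this toy model one should update downward on the citation itself, separately from evaluating the rest of the evidence.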
I was interested in your defence of the "truther" position until I saw this litany of questions. There are two main problems with your style of argument.
First, the quality of the evidence you are citing. Your standard of verification seems to be the Wikipedia standard - if you can find a "mainstream" source saying something, then you are happy to take it as fact (provided it fits your case). Anyone who has read newspaper coverage of something they know about in detail will know that, even in the absence of malice, the coverage is l...
Nick Lane's book The Vital Question has a great discussion of endosymbiosis in terms of metabolism. The point of the book is that all metabolism is powered by a proton gradient. It becomes very inefficient to maintain that in a larger cell, so having smaller subcompartments within a larger cell where metabolism can take place (like mitochondria) is vital for getting bigger. (There are some giant bacteria, but they have unusual metabolic adaptations.) He also discusses why mitochondria need to retain the key metabolic genes - I think it's to do with timely regulation.