All of dripgrind's Comments + Replies

Nick Lane's book The Vital Question has a great discussion of endosymbiosis in terms of metabolism. The point of the book is that all metabolism is powered by a proton gradient. It becomes very inefficient to maintain that in a larger cell, so having smaller subcompartments within a larger cell where metabolism can take place (like mitochondria) is vital for getting bigger. (There are some giant bacteria, but they have unusual metabolic adaptations). I think he also discusses why mitochondria need to retain the key genes for metabolism - I think it's to do with timely regulation.

0A1987dM
In principle, they're allowed to elect any baptized male (you'll be ordained bishop right after the election if you're not already one), though it's been centuries since the last time the new pope wasn't already a cardinal. (Don't ask me what happens if they elect a married man.)
dripgrind-10

[Executive summary: solve the underlying causes of your problem by becoming Pope]

I think it's a mistake to focus too much on the case of one particular convert to Catholicism simply because you know her personally. To do that is to fall prey to the availability heuristic.

The root cause of your problem with your friend is that the Catholic Church exists as a powerful and influential organisation which continues to promote its weird dogma, polluting Leah's mind along with millions of others. Before investing time and effort trying to flip her back to the side of reason, you should consider whether you could destroy the Church and dam the river of poison at its source. I will now outline a metho

When I said "you assume people have to invest their own money to ensure their health" I was obviously referring to preventative medical interventions, which is what you were actually asking about, not cryonics.

The breast/ovarian cancer risk genes are BRCA 1/2 - I seem to remember reading that half of carriers opt for some kind of preventative surgery, although that was in a lifestyle magazine article called something like "I CUT OFF MY PERFECT BREASTS" so it may not be entirely reliable. I'm sure it's not just a tiny minority who opt fo... (read more)

dripgrind1330

Only on this site would you see perfectly ordinary charity fundraising techniques described as "dark arts", while in the next article over, the community laments the poor image of the concept of beheading corpses and then making them rise again.

3tenshiko
I thought that we'd pretty much ditched the beheading part precisely for that reason?
4katydee
Best blog comment I've ever seen.
DSimon320

To be fair, it's just the heads that rise again, not the rest of the corpse... ah, I'm not helping, am I? :-)

Women with a high hereditary risk of breast cancer sometimes opt to have both their breasts removed pre-emptively. People take statins and blood pressure drugs for years to prevent heart attacks. Don't you have eye tests and dental checkups on a precautionary basis? There's plenty of preventative medical care.

Maybe the availability and marketing varies between countries - the fact that you assume people have to invest their own money to ensure their health suggests you're from the US or another country with a bad healthcare system. My country has a nationa... (read more)

4handoflixue
I tend to view there as being a strong difference between "go for a 2 hour checkup" and "invest $28K in cryonics". I wasn't aware of the pre-emptive breast removals, though; that would definitely qualify as the sort of thing I was looking for - and I still wonder how common it is amongst people who would benefit.

I'm not aware of any country whose socialized healthcare pays for cryonics, so cryonics is certainly an out-of-pocket cost. If I'm wrong, please let me know so that I can move ASAP :)

That does make me wonder if cryonics is a harder sell in countries with socialized healthcare, just because people aren't used to having to pay for healthcare at all. The US, at least, is used to the idea of spending money on that scale.
dripgrind-40

You're right that the motivation would be obvious today (to a certain tiny subset of geeky people). But what if there had been a decade of rising anti-AI feeling amongst the general population before the assassinations? Marches, direct actions, carried out with animal-rights style fervour? I'm sure that could all be stirred up with the right fanfiction ("Harry Potter And The Monster In The Chinese Room").

I understand what ethical injunctions are - but would SIAI be bound by them given their apparent "torture someone to avoid trillions of people having to blink" hyper-utilitarianism?

I understand what ethical injunctions are - but would SIAI be bound by them given their apparent "torture someone to avoid trillions of people having to blink" hyper-utilitarianism?

If you think ethical injunctions conflict with hyper-utilitarianism, you don't understand what they are. Did you read the posts?

To build a superintelligence that actually maximizes IBM's share price in a normal way that the CEO of IBM would approve of would require solving the friendly AI problem but then changing a couple of lines of code.

That assumes that being Friendly to all of humanity is just as easy as being Friendly to a small subset.

Surely it's much harder to make all of humanity happy than to make IBM's stockholders happy? I mean, a FAI that does the latter is far less constrained, but it's still not going to convert the universe into computronium.

9Scott Alexander
Not really. "Maximize the utility of this one guy" isn't much easier than "Maximize the utility of all humanity" when the real problem is defining "maximize utility" in a stable way. If it were, you could create a decent (though probably not recommended) approximation to Friendly AI problem just by saying "Maximize the utility of this one guy here who's clearly very nice and wants what's best for humanity." There are some serious problems with getting something that takes interpersonal conflicts into account in a reasonable way, but that's not where the majority of the problem lies. I'd even go so far as to say that if someone built a successful IBM-CEO-utility-maximizer it'd be a net win for humanity, compared to our current prospects. With absolute power there's not a lot of incentive to be an especially malevolent dictator (see Moldbug's Fhnargl thought experiment for something similar) and in a post-scarcity world there'd be more than enough for everyone including IBM executives. It'd be sub-optimal, but compared to Unfriendly AI? Piece of cake.
7JGWeissman
It is more work for the AI to make all of humanity happy than a smaller subset, but it is not really more work for the human development team. They have to solve the same Friendliness problem either way.

For a greatly scaled-down analogy: I wrote a program that analyzes stored procedures in a database and generates web services that call those stored procedures. I run that program on our database, which currently has around 1800 public procedures, whenever we make a release. Writing that program was the same amount of work for me as if there were 500 or 5000 web services to generate instead of 1800. It is the program that has to do more or less work if there are more or fewer procedures.
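[Editor's note: a toy sketch of the scaling point above. The names and structure here are invented for illustration, not taken from JGWeissman's actual program: the generator's source stays fixed-size no matter how many procedures it is fed; only its runtime output grows.]

```python
def generate_service(proc_name: str) -> str:
    """Emit a minimal (hypothetical) web-service wrapper for one stored procedure."""
    return (
        f"def service_{proc_name}(**params):\n"
        f"    return call_stored_procedure({proc_name!r}, params)\n"
    )

def generate_all(proc_names):
    # Writing these ~10 lines costs the same whether the database has
    # 500, 1800, or 5000 procedures; the per-procedure work is done by
    # the program, not the programmer.
    return "\n".join(generate_service(p) for p in proc_names)

print(generate_all(["GetUser", "SaveOrder"]))
```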

I'm not seriously suggesting that. Also, I am just some internet random and not affiliated with the SIAI.

I think my key point is that the dynamics of society are going to militate against deploying Friendly AI, even if it is shown to be possible. If I do a next draft I will drop the silly assassination point in favour of tracking AGI projects and lobbying to get them defunded if they look dangerous.

I'm not seriously suggesting that.

I would not make non-serious suggestions in a post titled "My True Rejection".

OK, what about the case where there's a CEV theory which can extrapolate the volition of all humans, or a subset of them? It's not suicide for you to tell the AI "coherently extrapolate my volition/the shareholders' volition". But it might be hell for the people whose interests aren't taken into account.

4falenas108
At that point, that particular company wouldn't be able to build the AI any faster than other companies, so it's just a matter of getting an FAI out there first and having it optimize rapidly enough that it could destroy any UFAI that comes along after.

This sounds in the direction of modeling AGI researchers as selfish mutants. Other motivations (e.g. poor Friendliness theories) and accidents (by researchers who don't understand the danger, or underestimate what they've built) are also likely.

This matters, since if AGI researchers aren't selfish mutants, you can encourage them to see the need for safety, and this is one goal of SIAI's outreach.

AGI researchers might not be selfish mutants, but they could still be embedded in corporate structures which make them act that way. If they are a s... (read more)

-2timtyler
Simpler to have infrastructure to monitor all companies: corporate reputation systems.

Ah, another point about maximising. What if the AI uses CEV of the programmers or the corporation? In other words, it's programmed to maximise their wealth in a way they would actually want? Solving that problem is a subset of Friendliness.

1orthonormal
That's not how the term is used here. Friendliness is prior to and separate from CEV, if I understand it correctly. From the CEV document:

Another idea - if you can't find someone skilled in market research to do this for you at a discount or free, read a textbook about how to assess potential new brands to help with designing the survey.

dripgrind-10

My point, then, is that as well as heroically trying to come up with a theory of Friendly AI, it might be a good idea to heroically stop the deployment of unFriendly AI.

0timtyler
Very large organisations do sometimes attempt to cut off their competitors' air supply. They had better make sure they have good secrecy controls if they don't want it to blow up in their faces.
dripgrind-10

Oh, I'm not saying that SIAI should do it openly. Just that, according to their belief system, they should sponsor false-flag cells who would do it (perhaps without knowing the master they truly serve). The absence of such false-flag cells indicates that SIAI aren't doing it - although their presence wouldn't prove they were. That's the whole idea of "false-flag".

If you really believed that unFriendly AI was going to dissolve the whole of humanity into smileys/jelly/paperclips, then whacking a few reckless computer geeks would be a small price to pay, ethical injunctions or no ethical injunctions. You know, "shut up and multiply", trillion specks, and all that.

8Nick_Tarleton
It seems to you that this follows from their belief system. Given how obvious the motivation is, and the high frequency with which people independently conclude that SIAI should kill AI researchers, think about the consequences, for anyone actively worried about UFAI, of anyone actually doing this. Ethical injunctions are not separate values to be traded off against saving the world; they're policies you follow because it appears, all things considered, that following them has highest expected utility, even if in a single case you fallibly perceive that violating them would be good. (If you didn't read the posts linked from that wiki page, you should.)

I freely admit there are ethical issues with a secret assassination programme. But what's wrong with lobbying politicians to retard the progress of unFriendly AI projects, regulate AI, etc? You could easily persuade conservatives to pretend to be scared about human-level AI on theological/moral/job-preservation grounds. Why not start shaping the debate and pushing the Overton window now?

I do understand what SIAI argues an unFriendly intelligence would do if programmed to maximize some financial metric. I just don't believe that a corporation in a pos... (read more)

Can you give me some references for the idea that "you don't need to have solved the AGI problem to have solved friendliness"? I'm not saying it's not true, I just want to improve this article.

Let's taboo "solved" for a minute.

Say you have a detailed, rigorous theory of Friendliness, but you don't have it implemented in code as part of an AGI. You are racing with your competitor to code a self-improving super-AGI. Isn't it still quicker to implement something that doesn't incorporate Friendliness?

To me, it seems like, even if the theo... (read more)

1Normal_Anomaly
If you have a rigorous, detailed theory of Friendliness, you presumably also know that creating an Unfriendly AI is suicide and won't do it. If one competitor in the race doesn't have the Friendliness theory or the understanding of why it's important, that's a serious problem, but I don't see any programmer who understands Friendliness deliberately leaving it out. Also, what little I know about browser design suggests that, say, supporting the blink tag is an extra chunk of code that gets added on later, possibly with a few deeper changes to existing code. Friendliness, on the other hand, is something built into every part of the system--you can't just leave it out and plan to patch it in later, even if you're clueless enough to think that's a good idea.

I really don't know what you mean.

dripgrind120

Action can be way worse than inaction, if what you end up doing is misleading yourself or doing harm to your cause.

I don't think what you've done is necessarily misleading or harmful, as long as you don't consider it anything more than incomplete, qualitative research into the range of responses the word "rationality" gets from random people.

But you really, really need to decide what the point of this exercise is. Are you trying to gather useful data, or make people feel more positive about rationality, or just get comfortable talking to random p... (read more)

2Raemon
I did use all of those reasons to justify why I thought I should do it beforehand. But I have noticed myself repeating those reasons to make myself feel more justified. (Also possible that my primary motivation in doing so in the first place was the social-skill development one) In any case, I think your recommendations for how to proceed are good ones.

Putting up a poll on Livejournal would also constitute "asking real people". Obviously an LJ poll isn't going to deliver a representative sample or actionable information - but then again, neither is asking 9 people who work in your building in New York.

dripgrind230

It's definitely a good idea to do this.

But the way you've set about doing it isn't going to produce any worthwhile data.

I'm no expert on branding and market research, but I'm pretty sure that the best practice in the field isn't having conversations with 9 non-random strangers in a lift (asking different leading questions each time) then bunging it in Google Docs and getting other people to add more haphazard data in the hope that someone will make a website that sorts it all out.

First you need to define the question you're asking. Exactly which sub-popu... (read more)

-1JackEmpty
Upvoted for having a very good point, downvoted for being a dick, then upvoted again for having attempted to edit out dickishness :D
7Raemon
I actually mostly agree with you. I hesitated a long time before posting this because I didn't think I had enough/the-right-kind of work done to justify sharing. But ultimately, the reason I posted it is the same reason I still think it's a good idea: Action is better than inaction, and a big problem I think people in our demographic face is overthinking and underdoing. Michaelos' recent post in another thread strikes me as very true. (It may not, in fact, be true, but it definitely matches up with other things I know). If I'm taking actions to solve a problem, I can learn from my mistakes, get feedback and try new approaches. (Thank you for your feedback, by the way.)

There are already half-baked efforts to "expand the rationality movement" underway. A half-baked attempt to figure out if that's even the right goal is not ideal, but I think it's better than nothing. I didn't spend otherwise important, productive time doing this. I was converting useless time in an elevator into:

1) Some new information about what people think about rationality
2) Some new information about how to ask people questions and get productive answers
3) Practice at talking to random people in general
4) Practice talking about rationality without evangelizing (yes, I realize I didn't do a great job at it, but it's something that I can only improve at with practice)

(I didn't see the definition as important so I could start deliberately evangelizing, but so that if the conversation went in a particular direction we'd have something ready to say)

I DID spend "potentially productive" time writing up this report and setting up the google doc, but that was time that taught me how to write up a Less Wrong post, and your feedback has given me things to think about to improve for next time, so thank you for that.

We talked about hiring real researchers at our meetups. We didn't end up doing it, mostly because from everything we knew, the official channels to do so were expensive and we had no id
0wedrifid
That is even worse thinking about quantum suicide and further still from likely Eliezer beliefs. Eliezer endures criticism for being too liberal with his mocking of certain beliefs about QM, of which the one you are relying on is a part.
dripgrind110

Survivors and cult historians alike agree that this post, combined with the founding of the "rationalist boot camps", set in motion the sequence of events which culminated in the tragic mass cryocide of 2024.

At every step, Yudkowsky's words seemed rational to his enthralled followers - and also to all outside observers. And yet, when it became clear that commercial pressures were causing strong AI to be deployed long before Coherent Awesomeness Extrap-volition Theory could be made mathematically rigorous, the cult turned against itself.

One by on... (read more)

Erm, maybe my standards are too high, but this didn't seem overwhelmingly well-written as fiction and I really worry when material that attacks a target that's supposed to be attacked gets a free pass as art. Or maybe you all actually enjoyed that, and I'm being unreasonable in expecting blog comments to meet publishable quality standards.

This got a few chuckles from me, but I have found that fiction in which present-day issues escalate implausibly into warfare is a strong indicator and promoter of affective death spirals. You do realize that this story features prominent falsehoods that people actually believe, and is completely absurd in ways not inherited from the things it's satirizing, right?

7Normal_Anomaly
Upvoted for amusement value.
8hwc
In other words, the “Special Committee” will result in slow evaporative cooling?
8Raemon
I voted this both up (for cleverness) and down (for distracting from actually important discussion).

So let's get this straight: the Iraqis blew up TWA 800, choosing a date that was symbolic to them, and the US covered it up.

Why the cover up? Going back to your four "reasons for obfuscation":

Because the US was unable to retaliate? - oh no, it was already bombing Iraq and enforcing a no-fly zone at that time. The US just wanted to ignore a terrorist attack by its enemy? Or maybe the Clinton administration wanted to maintain the flexibility to wait for the Iraqis to pull off a much worse terrorist attack, then wait to be voted out of office, ... (read more)

-2Mitchell_Porter
Let's review some history.

1990: Iraq invades Kuwait, leading eventually to war with the US.

Feb 27, 1991: Iraq withdraws from Kuwait and a ceasefire is negotiated.

End of 1992: a new US president; the military victor in Kuwait was defeated at home.

Feb 28, 1993 (anniversary of the withdrawal from Kuwait, more or less): World Trade Centre bombed. The mastermind, Ramzi Yousef, gets away.

Mid-1993: The US destroys the headquarters of Iraqi intelligence in Baghdad, claiming this is in retaliation for a plot to kill former president Bush.

Jan 1995: Yousef is accidentally captured in the Philippines while working on Operation Bojinka, a plot to blow up a dozen planes in midair. One month later, the CIA had a man in northern Iraq, working with Chalabi's INC on a plan to overthrow Saddam (but the NSC back in the US aborted the plan at the last moment).

Mid-1996: Yousef is on trial in NYC.

July 1996: a plane blows up over NYC, just as in Bojinka, killing everyone on board.

August 1996: the Iraqi Army goes north and drives the INC out of Iraqi Kurdistan.

What that says to me is that the Clinton administration thought Iraq was behind the 1993 WTC bombing, and behind Yousef's terror campaign, but they didn't want to say this in public. Instead, they tried to deal with the Iraq problem covertly and through other means. As to why Iraq would bomb a plane during the trial of their agent, I'd call it intimidation: don't bring up the connection, or else we will wage guerrilla war inside your own borders.

Quoting Richard Clarke's book (chapter 5): "... both Ramzi Yousef and Terry Nichols had been in the city of Cebu on the same days ... Nichols's bombs did not work before his Philippine stay and were deadly when he returned. We also know that Nichols continued to call Cebu long after his wife returned to the United States. The final coincidence is that several al Qaeda operatives had attended a radical Islamic conference a few years earlier in, of all places, Oklahoma City." (

My point wasn't that the reasons aren't "conventional" - it's the fact that he's making a list of things that hadn't happened yet as possible ways to start a war which shows that he was already committed to the invasion no matter what happened.

In fact, none of those things really came to pass (although the Bush administration tried to create the impression that there was a link to 9-11 or anthrax) and yet the invasion still went ahead.

Your conspiracy theory doesn't make a lot of sense. If the US government wanted to hide Iraq's supposed involveme... (read more)

-6Mitchell_Porter

It's a good thing that, despite your obvious desire to obtain WMD capability, you're just an AI with no way to control a nuclear weapons factory.

Unless... Clippy, is that Stuxnet worm part of you? 'Fess up.

Just because some institutions over-reacted or implemented ineffective measures doesn't mean that the concern wasn't proportionate or that effective measures weren't also being implemented.

In the UK, the government response was to tell infected people to stay at home and away from their GPs, and provide a phone system for people to get Tamiflu. They also ran advertising telling people to cover their mouths when they sneezed ("Catch it, bin it, kill it").

If anything, the government reaction was insufficient, because the phone system was delayed a... (read more)

Well, you also need to factor in the severity of the threat, as well as the risk of it happening.

Since the era of cheap international travel, there have been about 20 new flu subtypes, and one of those killed 50 million people (the Spanish flu, one of the greatest natural disasters ever), with a couple of others killing a few million. Plus, having almost everyone infected with a severe illness tends to disrupt society.

So to me that looks like there is a substantial risk (bigger than 1%) of something quite bad happening when a new subtype appears.

Given how ... (read more)

0mattnewport
Well obviously. I refer you to my previous comment. At this point our remaining disagreement on this issue is unlikely to be resolved without better data. Continuing to go back and forth repeating that I think there is a pattern of overestimation for certain types of risk and that you think the estimates are accurate is not going to resolve the question.

When you say that no one seems to be doing much, are you sure that's not just because the efforts don't get much publicity?

There is a lot that's being done:

Most nuclear-armed governments have massively reduced their nuclear weapon stockpiles, and try to stop other countries getting nuclear weapons. There's an international effort to track fissile material.

After the Cold War ended, the west set up programmes to employ Soviet nuclear scientists which have run until today (Russia is about to end them).

South Africa had nuclear weapons, then gave them up.

Israe... (read more)

0timtyler
Plus we invented the internet - greatly strengthening international relations and creating social and economic interdependency.

Just recently, a piece of evidence has come to light which makes it very hard to believe that the motivation for the war was an honest fear of WMDs.

Rumsfeld wrote talking points for a November 2001 meeting with Tommy Franks containing the section:

"How start?

  • Saddam moves against Kurds in north?
  • US discovers Saddam connection to Sept. 11 attacks or to anthrax attacks?
  • Dispute over WMD inspections?
    * Start now thinking about inspection demands."
    

http://www.gwu.edu/~nsarchiv/NSAEBB/NSAEBB326/index.htm

In the context of a meeting about planning an ... (read more)

-2Mitchell_Porter
In fact, the list of reasons offered for war in this memo is quite "conventional".

First item: The US and Iraq were still in a formal state of war, with Iraq still under the UN economic siege and being bombed regularly. The Kurdish north of Iraq had been a no-fly zone for Iraqi aircraft for years. If the Iraqi Army had moved north, even before 9/11, it would have been the occasion for war or serious combat.

Second item: Of course, if Iraq had been found assisting 9/11 or the anthrax letters, that would have provided a reason for war.

Third item: There were no UN weapons inspectors in Iraq as of 2001. They were all withdrawn in 1998, prior to "Operation Desert Fox", in which many supposed weapons sites were bombed, possibly in conjunction with a failed coup attempt. (The American legal basis for instigating regime change in Iraq, the Iraq Liberation Act, was created just a few months before.) A post-9/11 dispute over WMD inspections would have been, first of all, a dispute about getting inspectors back into Iraq.

Having said a few sane and verifiable things, now I want to add a big-picture comment that may sound, and may even be, rather more dubious. I spent a long time, back in the day, trying to figure out what was actually going on with respect to Iraq. The model I ended up with was a sort of forbidden hybrid of left- and right-wing conspiracy theory, according to which Iraq was involved in al Qaeda's attacks on America and perhaps also the anthrax letters (that's the "right-wing" part), and that this was known or suspected by the US executive branch ever since the first attempt to destroy the World Trade Centre (February 1993), but that they actively hid this from the American public (that's the "left-wing" part). In a further extension of the hypothesis or outlook, this was not a unique situation. For example, the terrorist wing of Aum Shinrikyo (which released nerve gas in the Tokyo subway in 1995) was full of North Korean agents. But there was nothing t
dripgrind110

I don't think worrying about nuclear war during the Cold War constituted either "crying wolf" or worrying prematurely. The Cuban Missile Crisis, the Able Archer 83 exercise (a year after "The Fate of the Earth" was published), and various false alert incidents could have resulted in nuclear war, and I'm not sure why anyone who opposed nuclear weapons at the time would be "embarrassed" in the light of what we now know.

I don't think an existential risk has to be a certainty for it to be worth taking seriously.

In the US, concer... (read more)

5Perplexed
I agree. And nuclear war was certainly a risk that was worth taking seriously at the time. However, that doesn't make my last sentence any less true, especially if you replace "embarrassed" with "exhausted". The risk of a nuclear war, somewhere, some time within the next 100 years, is still high - more likely than not, I would guess. It probably won't destroy the human race, or even modern technology, but it could easily cost 400 million human lives. Yet, in part because people have become tired of worrying about such things, having already worried for decades, no one seems to be doing much about this danger.

I don't know about SARS, but in the case of H1N1 it wasn't "crying wolf" so much as being prepared for a potential pandemic which didn't happen. I mean, very severe global flu pandemics have happened before. Just because H1N1 didn't become as virulent as expected doesn't mean that preparing for that eventuality was a waste of time.

0wnoise
Maybe at first, but I clearly recall that the hype was still ongoing even after it was known that this was a milder flu-version than usual. And the reactions were not well designed to handle the flu either. One example is that my university installed hand sanitizers, well, pretty much everywhere. But the flu is primarily transmitted not from hand-to-hand contact, but by miniature droplets when people cough, sneeze, or just talk and breathe: http://www.cdc.gov/h1n1flu/qa.htm

Wikipedia takes a more middle-of-the-road view, noting that it's not entirely clear how much transmission happens in which route, but still: http://en.wikipedia.org/wiki/Influenza

Which really suggests to me that hand-washing (or sanitizing) just isn't going to be terribly effective. The best preventative is making sick people stay home. Now, regular hand-washing is a great prophylactic for many other disease pathways, of course. But not for what the supposed purpose was.
1mattnewport
Obviously the crux of the issue is whether the official probability estimates and predictions for these types of threats are accurate or not. It's difficult to judge this in any individual case that fails to develop into a serious problem but if you can observe a consistent ongoing pattern of dire predictions that do not pan out this is evidence of an underlying bias in the estimates of risk. Preparing for an eventuality as if it had a 10% probability of happening when the true risk is 1% will lead to serious mis-allocation of resources. It looks to me like there is a consistent pattern of overstating the risks of various catastrophes. Rigorously proving this is difficult. I've pointed to some examples of what look like over-confident predictions of disaster (there's lots more in The Rational Optimist). I'm not sure we can easily resolve any remaining disagreement on the extent of risk exaggeration however.
dripgrind-10

I don't think you're taking this discussion seriously, and that hurts my feelings. I'm not going to vote your comment down, but I am going to unbend a couple of boxes of paperclips at the office tomorrow.

5Clippy
You're a bad human.
dripgrind150

Before I reply, let's just look at the phrase "WMDs has nothing to do with mass destruction" and think for a while. Maybe we should taboo the phrase "WMD".

Was it supposed to be bad for Saddam to have certain objects merely because they were regulated under the Chemical Weapons Convention, or because of their actual potential for harm?

The justification for the war was that Iraq could give dangerous things to terrorists. Or possibly fire them into Israel. It was the actual potential for harm that was the problem.

Rusty shells with traces ... (read more)

5Clippy
I don't think that's enough for clear communication on this issue. People have different views about which kinds of weapons are bad, and for what reason, and what the implications of this badness are. So, the most constructive thing to do at this point would be for each participant to spell out exactly which weapon production methods (be specific!) you would classify as a "WMD". Explain its functionality, the difficult parts in making them, and how a terrorist or government would go about procuring those parts. Only once you've explained exactly how these so-called "WMDs" are produced can we come to any agreement about who's correct regarding Saddam Hussein and the Iraq War.
dripgrind200

The existence of articles on Google which contain the keywords "Saddam syria wmd" isn't enough to establish that Saddam gave all his WMD to Syria.

The articles you Googled are from WorldNetDaily (a news source with a "US conservative perspective"), a New York tabloid, a news aggregator, and a right wing blog. Of course, it would be wrong to dismiss them based on my assumptions about the possible bias of the sources, but on reading them they don't provide much evidence for what you are asserting.

The first three state that various people (... (read more)

0Servant
"The last link says that US found 500 degraded chemical artillery shells from the 1980s which were too corroded to be used but might still have some toxicity. They don't sound like something that could actually be used to cause mass destruction." So just because it doesn't seem to cause mass destruction according to you, it therefore ISN'T a WMD? WMD has nothing to do with mass destruction. According to the US government and international law, WMD (mostly) means: "nuclear, chemical, and biological weapons." That's it. This weapon is classified as a chemical weapon under the Chemical Weapons Convention, so by that definition, Saddam had WMDs. Source: http://www.nti.org/f_wmd411/f1a1.html EDIT: Though for the most part, it was called to my attention that "WMD" may have no definition at all, and that people instead use the words NCB, for clarification. Also, the source points out that there are new types of WMDs, such as conventional weapons and radiological weapons.

An unFriendly AI doesn't necessarily care about human values - but if it were based on human neural architecture, I can't see why it might not exhibit good old-fashioned human values like empathy - or sadism.

I'm not saying that AI would have to be based on human uploads, but it seems like a credible path to superhuman AI.

Why do you think that an evil AI would be harder to achieve than a Friendly one?

5humpolec
Agreed, an AI based on a human upload gives no guarantee about its values... actually, right now I have no idea how the Friendliness of such an AI could be ensured. Maybe not harder, but less probable - 'paperclipping' seems to be a more likely failure of Friendliness than an AI wanting to torture humans forever. I have to admit I haven't thought much about this, though.

Here's another possible objection to cryonics:

If an Unfriendly AI Singularity happens while you are vitrified, it's not just that you will fail to be revived - perhaps the AI will scan and upload you and abuse you in some way.

"There is life eternal within the eater of souls. Nobody is ever forgotten or allowed to rest in peace. They populate the simulation spaces of its mind, exploring all the possible alternative endings to their life." OK, that's generalising from fictional evidence, but consider the following scenario:

Suppose the Singularity d... (read more)

3humpolec
What you're describing is an evil AI, not just an unFriendly one - unFriendly AI doesn't care about your values. Wouldn't an evil AI be even harder to achieve than a Friendly one?

It's not true to say that those shifts took place without any "shift in underlying genetic makeup of population" - there has been significant human evolution over the last 6,000 years during the "shift from agricultural to urban lifestyle".

Of course, this isn't an argument for innatism, since evolution didn't cause the changes in lifestyle, but the common meme that human population genetics are exactly the same today as they were on the savannah isn't true.

dripgrind110

I am "happy to take it as fact" until I find something contradictory. When that happens, I generally make note of both sources and look for more authoritative information. If you have a better methodology, I am open to suggestions.

So your standard of accepting something as evidence is "a 'mainstream source' asserted it and I haven't seen someone contradict it". That seems like you are setting the bar quite low. Especially because we have seen that your claim about the hijackers not being on the passenger manifest was quickly debunked... (read more)

dripgrind120

Oh, and to try and make this vaguely on topic: say I was trying to do a Bayesian analysis of how likely woozle is to be right. Should I update on the fact that s/he is citing easily debunked facts like "the hijackers weren't on the passenger manifest", as well as on the evidence presented?

5LucasSloan
Yes. A bad standard of accepting evidence causes you to lose confidence in all of the other evidence.
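The update being asked about can be made concrete with a toy application of Bayes' theorem. All of the numbers below (the prior, and both likelihoods) are invented purely for illustration, not claims about the actual case:

```python
# A toy Bayesian update (hypothetical numbers): how much should citing
# one easily debunked claim lower our credence in a source's thesis?
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) via Bayes' theorem."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 0.5                 # initial credence that the thesis is right
p_debunked_if_right = 0.05  # careful, correct sources rarely cite bad facts
p_debunked_if_wrong = 0.40  # sloppy or motivated sources do so more often

posterior = bayes_update(prior, p_debunked_if_right, p_debunked_if_wrong)
print(round(posterior, 3))  # 0.111
```

Under these toy likelihoods, one debunked citation drags credence from 50% down to about 11% - which is the sense in which a bad standard of evidence undermines confidence in everything else the source presents.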

I was interested in your defence of the "truther" position until I saw this litany of questions. There are two main problems with your style of argument.

First, the quality of the evidence you are citing. Your standard of verification seems to be the Wikipedia standard - if you can find a "mainstream" source saying something, then you are happy to take it as fact (provided it fits your case). Anyone who has read newspaper coverage of something they know about in detail will know that, even in the absence of malice, the coverage is l... (read more)

-3woozle
I am "happy to take it as fact" until I find something contradictory. When that happens, I generally make note of both sources and look for more authoritative information. If you have a better methodology, I am open to suggestions. The "Wikipedia standard" seems to work pretty well, though -- didn't someone do a study comparing Wikipedia's accuracy with Encyclopedia Britannica's, and they came out about even?

I wasn't intending to be snide; I apologize if it came across that way. I meant it sincerely: Jack found an error in my work, which I have since corrected. I see this as a good thing, and a vital part of the process of successive approximation towards the truth.

I also did not cite the 6 living hijackers as a "killer anomaly" but specifically said it didn't seem to be worth worrying about -- below the level of my "anomaly filter". Just as an example of my thought-processes on this: I haven't yet seen any evidence that the "living hijackers" weren't simply people with the same names as some of those ascribed to the hijackers. I'd need to see some evidence that all (or most) of the other hijackers had been identified as being on the planes but none of those six before thinking that there might have been an error... and even then, so what? If those six men weren't actually on the plane, that is a loose end to be explored -- why did investigators believe they were on the plane? -- but hardly incriminating.

I verify when I can, but I am not paid to do this. This is why my site (issuepedia.org) is a wiki: so that anyone who finds errors or omissions can make their own corrections. I don't know of any other site investigating 9/11 which provides a wiki interface, so I consider this a valuable service (even if nobody else seems to).

The idea that this is unlikely is one I have seen repeatedly, and it makes sense to me: if someone came at me with a box-cutter, I'd be tempted to laugh at them even if I wasn't responsible for a plane-load of passengers -- and I've n