The Art of Lawfare and Litigation Strategy
Bertrand Russell, well aware of the health risks of smoking, defended his addiction in a videotaped interview. See if you can spot his fallacy!
Today on SBS (a radio channel in Australia) I heard reporters breaking the news that a Nature article reports that cancer is largely due to choices. I was shocked by what appeared to be gross violations of cultural norms around victim-blaming. I wanted to investigate further, since science reporting is notoriously inaccurate.
The BBC reports:
Earlier this year, researchers sparked a debate after suggesting two-thirds of cancer types were down to luck rather than factors such as smoking.
The new study, in the journal Nature, used four approaches to conclude only 10-30% of cancers were down to the way the body naturally functions or "luck".
"They can't smoke and say it's bad luck if they have cancer."
-Dr Yusuf Hannun, the director of Stony Brook
The BBC article is roughly concordant with the SBS report.
I've had a fairly simple relationship with cigarettes. I've smoked others' cigarettes a few times while drinking. I bought my first cigarette to try soon after I came of age and discarded the rest of the packet. One of my favourite memories is trying a vanilla-flavoured cigar. I still feel tempted to try it again whenever I smell a nice scent, or think about that moment. Though now, I regularly reject offers to go to local venues and smoke hookah. Even after my first cigarette, I felt the tug of nicotine and tobacco. Then again, I'm unusually sensitive to even the mildest addictive substances, so that doesn't surprise me in retrospect. What does surprise me is that society is starting to take a ubiquitous but increasingly undeniable health issue seriously despite deep entanglement with long-standing ways of doing things, political ideologues, individual addictions and addiction-driven political behaviour, and shareholders' pockets.
Though the truth claim of the article isn't that surprising. The dangers of smoking are publicised everywhere. Emphasis mine:
13 die every day in Victoria as a result of smoking.
Tobacco use (which includes cigarettes, cigars, pipes, snuff, chewing tobacco) is the leading preventable cause of death and illness in our country. It causes more deaths annually than those killed by AIDS, alcohol, automobile accidents, murders, suicides, drugs and fires combined.
So I decided to learn more about the relationships between society and big tobacco, and between government and big tobacco, to see what people interested in influencing public policy and public health can learn (effective altruism policy analytics, take note!) about policy tractability in surprising places.
Here's what might make public health policy interventions tractable:
Proof of concept
Governments are great at successfully suing the shit out of tobacco. And big tobacco takes it like a champ:
It started with US states experimenting with suing big tobacco. Eventually only a couple of states hadn't done it. Big Tobacco and all those attorneys general gathered and arranged a huge-ass settlement that resulted in the disestablishment of several shill research institutes supporting big tobacco and big payouts to sponsor anti-smoking advocacy groups (which seems politically unethical but consequentially good, though I suppose that's a different story). What's important to note here is the experimentation within US states culminating in the legitimation of normative lawfare. It's called 'diffusion theory' and is described here.
Wait wait wait. I know what you're thinking, non-US LessWrongers: another US-centric analysis that isn't too transportable. No. I'm not American in any sense; it's just that the US seems to be a point of diffusion. What's happening regarding marijuana in the US now seems to mirror this in some sense, but it's ironically pro-smoking. That illustrates the cause-neutrality of this phenomenon.
That settlement wasn't the end of the lawfare:
On August 17, 2006, a U.S. district judge issued a landmark opinion in the government's case against Big Tobacco, finding that tobacco companies had violated civil racketeering laws and defrauded consumers by lying about the health risks of smoking.
In a 1,653 page ruling, the judge stated that the tobacco industry had deceived the American public by concealing the addictive nature of nicotine plus had targeted youth in order to get them hooked on cigarettes for life. (Appeals are still pending).
Victims who ask for help
I also stumbled upon some smokers' attitudes to smoking, and their, well, seemingly vexatious attitudes to big tobacco, when looking up lawsuits and big tobacco. Here's a copy of the comments section on one website. It's really heartbreaking. It's a small sample size, but note their education too, suggesting a socio-economic effect. Note that these comments were posted publicly and are blatant cries for help. This suggests political will at a grassroots level that is as yet under-catered for by services and/or political action. That's a powerful thing, perhaps: visible need in public forums, addressed to those in the relevant space. Note that they commented on a class action website.
http://s10.postimg.org/61h7b1rp5/099090.png

Note some of the language:
"I feel like I'm being tortured"
You don't see that kind of language used in any effective-altruism-branded publications.
Villains
Somewhat famous documents exposing the tobacco industry's internal motivations and dodginess seem to be quoted everywhere on websites documenting and justifying lawfare against the tobacco industry. Public health and the personal dangers of smoking don't seem to have been the big catalyst, but rather a villainous enemy. I'm reminded of the 'Stop the boats' campaign, which villainised people smugglers instead of speaking of the potential to save the lives of refugees who fall overboard from shoddy vessels. I think of the Open Borders campaigners associated with GiveWell's Open Philanthropy Project: the project is perceived as just about the most intractable policy prospect around (I'd say a moratorium on AI research is up there), and at the same time no villain has been identified in the picture. That's not entirely surprising. I recall the hate I received when I suggested that people should consider prostituting themselves for effective altruism, or soliciting donations from the porn industry, whose donors struggle to donate since many charities, particularly religious ones, refuse to accept their donations. Likewise, it's hard to get rid of encultured perceptions of what's good and what's bad, rather than enumerating (or 'checking', as Eliezer writes in the Sequences) the consequences.
Relative merit
This is something effective altruists are doing.
William Savedoff and Albert Alwang recently identified taxes on tobacco as “the single most cost-effective way to save lives in developing countries” (2015, p.1).
...
Tobacco control programs often pursue many of these aims at once. However, raising taxes appears to be particularly cost-effective — e.g., raising taxes costs $3 - $70 per DALY avoided (Savedoff and Alwang, p.5; Ranson et al. 2002, p.311) — so I will focus solely on taxes. I will also focus only on low and middle income countries (LMICs) because that is where the problem is worst and where taxes can do the most good most cost-effectively.
..
But current trends need not continue. We can prevent deaths from tobacco use. Tobacco taxation is a well-tested and effective means of decreasing the prevalence of smoking—it gets people to stop and prevents others from starting. The reason is that smokers are responsive to price increases, provided that the real price goes up enough.
...
Even if these numbers are off by a factor of 2 or 3, tobacco taxation appears to be on par with the most effective interventions identified by GiveWell and Giving What We Can. For example, GiveWell estimates that AMF can prevent a death for $3340 by providing bed nets to prevent malaria and estimates the cost of schistosomiasis deworming at $29 - $71 per DALY.
There are a few reasons to balk at recommending tobacco tax advocacy to those aiming to do the most good with their donations, time, and careers.
- Tobacco taxes may not be a tractable issue
- Tobacco taxes may be a “crowded” cause area
- Unanswered questions about the empirical basis of cost-effectiveness estimates
...
- There may not be a charity to donate to
Smoking is very harmful and very common. Globally, 21% of people over 15 smoke (WHO GHO)
-https://www.givingwhatwecan.org/post/2015/09/tobacco-control-best-buy-developing-world/
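Since the comparison above is simple arithmetic, here is a minimal sketch in Python of the quoted figures and of the post's own "off by a factor of 2 or 3" robustness check. The dollar ranges are the ones quoted above; the factor-of-3 pessimism multiplier and the variable names are my own illustrative assumptions, not anything from the source.

```python
# Minimal sketch comparing the cost-effectiveness figures quoted above.
# Dollar ranges are as quoted from the GivingWhatWeCan post; the
# pessimism multiplier is an illustrative assumption mirroring the
# post's "off by a factor of 2 or 3" caveat.

per_daly = {
    "tobacco taxation": (3, 70),           # $ per DALY avoided (Savedoff and Alwang; Ranson et al.)
    "schistosomiasis deworming": (29, 71), # $ per DALY (GiveWell estimate, as quoted)
}

PESSIMISM = 3  # suppose the tobacco tax estimate is too optimistic by 3x

for name, (low, high) in per_daly.items():
    print(f"{name}: ${low}-${high} per DALY")

low, high = per_daly["tobacco taxation"]
print(f"tobacco taxation, pessimistic: ${low * PESSIMISM}-${high * PESSIMISM} per DALY")
# Prints $9-$210 per DALY, a range that still overlaps the deworming
# estimate, which is the point the quoted passage is making.
```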
Attributing public responsibility AND incentivising independent private interest in a cause
The Single Best Health Policy in the World: Tobacco Taxes
The single most cost-effective way to save lives in developing countries is in the hands of developing countries themselves: raising tobacco taxes. In fact, raising tobacco taxes is better than cost-effective. It saves lives while increasing revenues and saving poor households money when their members quit smoking.
-http://www.cgdev.org/publication/single-best-health-policy-world-tobacco-taxes
Tobacco lawsuits can be hard to win but if you have been injured because of tobacco or smoking or secondary smoke exposure, you should contact an attorney as soon as possible.
If you have lung cancer and are now, or were formerly, a smoker or used tobacco products, you may have a claim under the product liability laws. You should contact an experienced product liability attorney or a tobacco lawsuit attorney as soon as possible because a statute of limitations could apply.
There's a whole bunch of legal literature like this: http://heinonline.org/HOL/LandingPage?handle=hein.journals/clqv86&div=45&id=&page=
that I don't have the background to search for and interpret. So if I'm missing important things, perhaps it's attributable to that. Please point them out.
So that's my analysis: plausible modifiable variables that influence the tractability of a public health policy initiative:
(1) Attributing public responsibility AND incentivising independent private interest in a cause
(2) Relative merit
(3) Villains
(4) Victims who ask for help
(5) Low-scale proof of concept
Remember, lawfare isn't just the domain of governments; here's an example of non-government lawfare for public health. Governments are just often better resourced than individuals, and individuals need groups to advocate on their behalf. Perhaps that's a direction the Open Philanthropy Project could take.
I want to finish by soliciting an answer on the following question that is posed to smokers in a recurring survey by a tobacco control body:
Do you support or oppose the government suing tobacco companies to recover health care costs caused by tobacco use?
Now, there may be some 'reverse causation' at play here to explain why tobacco control has been so politically effective: BECAUSE it's such a good cause, it's low-hanging fruit that's already being picked.
What's the case for or against this?
The case for its selection as a cause: tobacco control
Importance: high
Tobacco is the leading preventable cause of death and disease both in the world (see: http://www.who.int/nmh/publications/fact_sheet_tobacco_en.pdf) and in Australia (see: http://www.cancer.org.au/policy-and-advocacy/position-statements/smoking-and-tobacco-control/).
‘Tobacco smoking causes 20% of cancer deaths in Australia, making it the highest individual cancer risk factor. Smoking is a known cause of 16 different cancer types and is the main cause of Australia’s deadliest cancer, lung cancer. Smoking is responsible for 88% of lung cancer deaths in men and 75% of lung cancer cases in women in Australia.’
Tractability: high
The World Health Organization’s Framework Convention on Tobacco Control (FCTC) was the first public health treaty ever negotiated.
Based on private information, the balance of healthcare costs against tax revenues, as estimated by health advocates compared with treasury estimates in Australia, may have been relevant to Australia’s leadership in tobacco regulation. That submission may or may not be adequate in complexity (i.e. taking into account, for instance, the impact of reduced lifespans on reduced pension payouts). There is a good article about the behavioural economics of tobacco regulation here (http://baselinescenario.com/2011/03/22/incentives-dont-work/).
Room for advocacy: low
There are many hundreds of consumer support and advocacy groups, and cancer charities across Australia.
Room for employment: low?
Room for consulting: high
The rigour of the analysis, and the achievements themselves, in the Cancer Council of Australia annual review is underwhelming, as is the Cancer Council of Victoria’s annual report. There is a better-organised body of evidence relating to their impact on their wiki pages about effective interventions and policy priorities. At a glance, there appears to be room for more quantitative, methodologically rigorous and independent evaluation. I will be looking at GiveWell to see what recommendations can be translated. I will keep records of my findings to formulate draft guidelines for advising organisations in the Cancer Councils’ position, which I estimate, going by vague memory of GiveWell’s claims, make up the majority of the philanthropic space.
Is Pragmatarianism (Tax Choice) Less Wrong?
I sure think it is! But I could be wrong...
This is my first article/post here and, to be honest, I have this website open in another tab and I keep refreshing it to see if I still have enough points to post. I wish I had taken a screenshot every time my karma changed. First it was 0, then it was -1, then it was back to 0, then I think it jumped up to 5. I thought I was safe, but then this morning it was down to 0. So if this post seems "linky", it might be because I'm trying to share as much information as I can while my window of opportunity is still open.
Pragmatarianism (tax choice) is the belief that taxpayers should be able to choose where their taxes go. Tax choice is the broad concept while pragmatarianism is my own personal spin on it... but sometimes I use "tax choice" when I mean pragmatarianism. Eh, at this point I don't think it's a big deal. Really the only thing nice about the word "pragmatarianism" is that it functions as a unique ID... which is extremely helpful when it comes to searches. Don't have to worry about wading through irrelevant results.
Here are some links from my blog which should help you decide whether pragmatarianism is more or less wrong...
Pragmatarianism FAQ - a good place to start. It's pretty short.
Key concepts - a work in progress. Some of the concepts are linked to entries which have PDF files with a bunch of relevant quotes and passages. If you like any of them then please share them in this thread... Quotes Repository. I shared a few but they didn't fare so well... so I'm guessing that most people here aren't fans of economics... or they aren't fans of my economics.
Progress as a Function of Freedom - hedging bets, the impossibility of hostile aliens, the problem with "rights".
What Do Coywolves, Mr. Nobody, Plants And Fungi All Have In Common? - the universal drive to choose the most valuable option, the carrying model as an explanation for our intelligence, a bit on rationality.
Builderism - where better options come from, globalization, debunking Piketty, eliminating poverty.
My Robin Hanson trilogy...
Is Robin Hanson's Path To Efficient Voting Pragmatic Or Brilliant Or Both? - maybe we should have a civic currency?
Rescuing Robin Hanson From Unmet Demand - how many other people are in the same boat?
Futarchy vs Pragmatarianism - is it logically inconsistent to support one but not the other?
/trilogy.
AI Box Experiment vs Xero's Rule - my first brainstorm attempt to wrap my mind around the idea of an AI box.
Is A Procreation License Consistent With Libertarianism? - would a procreation license be less wrong?
Why I Love Your Freedom - my critique of the best critique of libertarianism. A bit on rationality.
So what do you think? Am I in the right place?
What else? Of course I'm an atheist! And I love sci-fi... and for sure I want to live forever. The major obstacle is that too many people fail to grasp that progress depends on difference. I do my best to try and eliminate this obstacle. Unfortunately I suck at writing and my drawings are even worse. Oh well.
Let me know if you have any questions.
[LINK] U.S. Views of Technology and the Future
I just found this on slashdot:
"U.S. Views of Technology and the Future - Science in the next 50 years" by the Pew Research Center
This report emerges from the Pew Research Center’s efforts to understand public attitudes about a variety of scientific and technological changes being discussed today. The time horizons of these technological advances span from today’s realities—for instance, the growing prevalence of drones—to more speculative matters such as the possibility of human control of the weather.
This is interesting, especially in comparison to the recent posts on forecasting, which focussed on expert forecasts.
What I found most notable was the public opinion on their use of future technology:
% who would do the following if possible...
50% ride in a driverless car
26% use brain implant to improve memory or mental capacity
20% eat meat grown in a lab
Don't they know Eutopia is Scary? I'd guess that if these technologies really become available and are reliable, only the elderly will be unable to overcome their preconceptions. And everybody will eat artificial meat if it is cheaper, healthier and tastes the same (and the testers confirm this).
When what is rational is not what is "right"
A little background: I have an above-average commute to work and make use of the time by listening to public radio. I have been doing this for just over a year without "doing my part" and contributing. The primary justification for this has been that my commute path and times have me listening to three different public radio stations. I never could decide which station to pledge to: which needed it more, which I liked best, which had the least annoying pledge breaks.
The other day, during a pledge break, they played a promo by Ira Glass which went something like:
I'm going to say something that has never happened in a pledge break before. We don't need your money. You do not have to call. There is no evidence to back that up. Every year we say you have to pledge and give your money or we will go away, but year after year, we are still here, even though you didn't pledge.
You should call because it's the right thing to do. You like public radio, enough to listen to a pledge break, so you should pledge, not because it is logical but because it is right...
This struck a chord with me, perhaps because of my recent attention here at LW (does that count as focus bias?). It brought two LW-relevant questions to mind.
If pledging public radio is the right thing to do, but all of the evidence suggests I personally do not have to pledge, what rational algorithm achieves that outcome? It is not like you can make a 'lives saved per dollar' figure for NPR; it is either there or not. I guess in a really poorly funded station, one might be able to come up with a figure for minutes of programming per dollar (a toy calculation below). Does doing the "right thing" simply produce a warm feeling? Or is it more like I should pledge because everyone should pledge, similar to "I should tell the truth so no one lies to me"?
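As a toy illustration of that "minutes of programming per dollar" metric, here is a sketch in Python; both input figures are invented for illustration and describe no real station.

```python
# Toy "minutes of programming per dollar" calculation.
# Both inputs are made-up assumptions, not any real station's figures.
annual_budget_usd = 500_000           # hypothetical small-station budget
minutes_per_year = 365 * 18 * 60      # assume 18 hours of programming a day

print(minutes_per_year / annual_budget_usd)  # ~0.79 minutes per dollar
```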
Would pledging public radio make a good metric for the friendliness of an AI? Obviously not as an unchangeable line of code that says "pledge NPR", but as an AI that decides becoming a member of KQED is a good thing to do. I'm sure there are plenty of other similar situations, like donating to open source software that you use or paying to park in the state forest parking lot instead of parking on the street and walking in for free. It might seem silly, but an AI that chooses to become a member of the local public radio station will probably also choose not killing everyone over some increase in another utility function.
Just letting alcoholics drink
"Wet houses"-- subsidized housing for alcoholics (they need to get most of their own money for alcohol, but their other expenses are covered) might actually be a good idea. It's cheaper than trying to get them to stop drinking, arguably kinder than trying to get people to take on a very hard task that they aren't interested in, and leads to less collateral damage than having alcoholics couch-surfing or living on the street.
Utilitarians, what do you think?
How to improve the public perception of the SIAI and LW?
I was recently thinking about the possibility that someone with a lot of influence might at some point try to damage LessWrong and the SIAI, and about what preemptive measures one could take to counter this.
If you believe that the SIAI does the most important work in the universe, and if you believe that LessWrong serves the purpose of educating people to become more rational and subsequently understand the importance of trying to mitigate risks from AI, then you should care about public relations: you should try to communicate your honesty and well-intentioned motives as effectively as possible.
Public relations are very important because a good reputation is necessary to do the following:
- Making people read the Sequences.
- Raising money for the SIAI.
- Convincing people to take risks from AI seriously.
- Allowing the SIAI to influence other AGI researchers.
- Mitigating future opposition by politicians and other interest groups.
- Not being an easy target for criticism.
An attack scenario
First one has to identify characteristics that could potentially be used to cast a damaging light on this community. Here the most obvious possibility seems to be to portray the SIAI, together with LessWrong, as a cult.
After some superficial examination an outsider might conclude the following about this community:
- Believing in heaven and hell in the form of a positive or negative Singularity.
- Discouraging skepticism while portraying their own standpoint as clear-cut.
- Encouraging people to take ideas seriously.
- Encouraging and signaling strong cooperation and conformity.
- Evangelizing by scaring people and telling them to donate money.
- Applying social pressure by employing a reputation system with positive and negative incentives.
- Removing themselves from empirical criticism by framing everything as a prediction.
- Discrediting mainstream experts while placing themselves a level above them.
- Discouraging transparency and openness by referring to the dangers of AI research.
- Using scope insensitivity and high risk to justify action, outweigh low probabilities, and disregard opposing evidence.
Most of this might sound wrong to the well-read LessWrong reader. But how would those points be received by mediocre rationalists who don't know what you know, especially if eloquently summarized by a famous and respected person?
Preemptive measures
How one might counter such conclusions:
- Create an introductory guide to LessWrong.
- Explain why the context of the Sequences is important.
- Explain why LessWrong differs from mainstream skepticism.
- Enable and encourage outsiders to challenge and question the community before turning against it.
- Discourage the downvoting of people who have not yet read the Sequences.
- Don't expect people to read hundreds of posts without supporting evidence that it is worth it.
- Avoid jargon when talking to outsiders.
- Detach LessWrong from the SIAI by creating an additional platform to talk about related issues.
- Ask or pay independent experts to peer-review.
- Make the finances of the SIAI easily accessible.
- Openly explain why and for what the SIAI currently needs more money.
So what do you think needs improvement and what would you do about it?
Link: why training a.i. isn’t like training your pets
As the SIAI gains publicity, more people are reviewing its work. I am not sure how popular this blog is, but judging by its about page, its author writes for some high-profile blogs. His latest post takes on Omohundro's "Basic AI Drives":
When we last looked at a paper from the Singularity Institute, it was an interesting work asking if we actually know what we’re really measuring when trying to evaluate intelligence by Dr. Shane Legg. While I found a few points that seemed a little odd to me, the broader point Dr. Legg was perusing was very much valid and there were some equations to consider. However, this paper isn’t exactly representative of most of the things you’ll find coming from the Institute’s fellows. Generally, what you’ll see are spanning philosophical treatises filled with metaphors, trying to make sense out of a technology that either doesn’t really exist and treated as a black box with inputs and outputs, or imagined by the author as a combination of whatever a popular science site reported about new research ideas in computer science. The end result of this process tends to be a lot like this warning about the need to develop a friendly or benevolent artificial intelligence system based on a rather fast and loose set of concepts about what an AI might decide to do and what will drive its decisions.
Link: worldofweirdthings.com/2011/01/12/why-training-a-i-isnt-like-training-your-pets/
I posted a few comments but do not think I am the right person to continue that discussion. So if you believe it is important what other people think about the SIAI, and want to improve its public relations, there is your chance. I'm myself interested in the answers to his objections.
(I initially wrote the following text as a comment, but upon short reflection I thought it was worth a separate topic, so I adapted it accordingly.)
LessWrong is largely concerned with teaching rationality skills, but for good reasons most of us also incorporate concepts like the singularity and friendly self-improving AGI into our "message". Personally I wonder, however, whether we should be as outspoken about that sort of AGI as we currently are. Right now talking about self-improving AGI doesn't pose any kind of discernible harm, because "outsiders" don't feel threatened by it and look at it as far-off, or even impossible, science fiction. But as time progresses, I worry that through exponential advances in robotics and other technologies people will become more aware, concerned and perhaps threatened by self-improving AGI, and I am not sure whether we should be outspoken about things like... the fact that the majority of AGIs in "mind-design-space" will tear humanity to shreds if their builders don't know what they're doing. Right now such talk is harmless, but my message here is that we may want to reconsider whether or not we should talk publicly about such topics in the not-too-distant future, so as to avoid compromising our chances of success when it comes to actually building a friendly self-improving AGI.
First off, I suspect I have a somewhat different conception of how the future is going to pan out in terms of what role the public perception and acceptance of self-improving AGI will play. Personally I'm not under the impression that we can prepare a sizable portion of the public (let alone the global public) for the arrival of AGI (prepare them in a positive manner, that is). I believe singularitarian ideas will just continue to compete with countless other worldviews in the public meme-sphere, without ever becoming truly mainstream, until it is "too late" and we face something akin to a hard takeoff and perhaps lots of resistance.
I don't really think that we can (or need to) reach a consensus within the public for the successful takeoff of AGI. Quite to the contrary, I actually worry that carrying our view to the mainstream will have adverse effects, especially once people realize that we aren't some kind of technophile crackpot religion, but that the futuristic picture we try to paint is actually possible and not at all unlikely to happen. I would certainly prefer to face apathy over antagonism when push comes to shove, and since self-improving AGI could spring into existence very rapidly and take everyone apart from "those in the know" by surprise, I would hate to lose that element of surprise over our potentially numerous "enemies".
Now of course I don't know which path will yield the best result: confronting the public or keeping a low profile? I suspect this may become one of the few hot-button topics where our community will sport widely diverging opinions, because we simply lack a way to accurately model (especially so far in advance) how people will behave upon encountering the reality and the potential threat of AGI. Just remember that the world doesn't consist entirely of the US and that AGI will impact everyone. I think it is likely that we may face serious violence once our vision of the future becomes more widely known and gains additional credibility from exponential improvements in advanced technologies. There are players on this planet who will not be happy to see an AGI come out of America, or for that matter out of Eliezer's or whoever's garage. This is why I would strongly advocate a semi-covert international effort when it comes to the development of friendly AGI. (Don't say that it's self-improving and may become a trillion times smarter than all humans combined; just pretend it's roughly a human-level AI.)
It is incredibly hard to predict the future behavior of people, but on a gut level I absolutely favor an international, semi-stealthy approach. It seems to be by far the safest course to take. Once the concepts of the singularity and AGI gain traction in the spheres of science and maybe even politics (perhaps in a decade or two), I would hope that minds in AI and AGI from all over the world join an international initiative to develop self-improving AGI together. (Think CERN.) To be honest, I can't even think of any other approach to developing the later stages of AGI that doesn't look doomed from the start (not doomed in the sense of being technically unfeasible, but doomed in terms of significant others thinking: "we're not letting this suspicious organization/country take over the world with their dubious AI". Remember that self-improving AGI is potentially much more destructive than any nuclear warhead, and powers not involved in its development may blow a gasket upon realizing the potential danger.)
So from my point of view, the public perception and acceptance of AGI is a comparatively negligible factor in the overall bigger picture, if managed correctly. "People" don't get a say in weapons development, and I predict they won't get a say when it comes to self-improving AGI. (And we should be glad they don't, if you ask me.) But in order not to risk a public outcry when the time is ripe and AGI is in its last stages of completion, we should give serious consideration to not upsetting and terrifying the public with our... "vision of the future".
PS: Somehow CERN comes to mind again. Do you remember when critics came up with ridiculous ideas about how the LHC could destroy the world? It was a very serious allegation, but the public largely shrugged it off, not because they had any idea of course, but because they were reassured by enough eggheads that it wouldn't happen. It would be great if we could achieve a similar reaction towards AGI criticism (by which I mean generic criticism of course, not useful criticism; after all, we actually want to be as sure about how the AGI will behave as we were about the LHC not destroying the world). Once robots become more commonplace in our lives, I think we can reasonably expect that people will begin to place their trust in simple AIs, and they will hopefully become less suspicious towards AGI and simply assume (like a lot of current AI researchers, apparently) that it is somehow trivial to make it behave friendly towards humans.
So what do you think? Should we become more careful when we talk about self-modifying artificial intelligence? I think the "self-modifying" and "trillions of times smarter" parts are some bitter pills to swallow, and people won't be amused once they realize that we aren't just building artificial humans but artificial, all-powerful, all-knowing, and (hopefully) all-loving gods.
EDIT: 08.07.11
PS: If you can accept that argument as rationally sound, I believe a discussion about "informing everyone vs. keeping a low profile" is more than warranted. Quite frankly, though, I am pretty disappointed with most people's reactions to my essay thus far... I'd like to think that this isn't just my ego acting up, but I'm sincerely baffled as to why this essay usually hovers just slightly above 0 points and frequently gets downvoted back to neutrality. Perhaps it's because of my style of writing (admittedly I'm often not as precise and careful with my wording as many of you are), or my grammar mistakes due to my being German, but preferably it would be because of some serious rational mistakes I made and of which I am still unaware... in which case you should point them out to me.
Presumably not that many people have read it, but in my eyes those who did and voted it down have not provided any kind of rational rebuttal here in the comment section explaining why this essay stinks. I find the reasoning I provided to be simple and sound:
0.0) Either we place "intrinsic" value on the concept of democracy and respect (and ultimately adhere to) public opinion in our decision to build and release AGI, OR we don't, and make that decision a matter of rational expert opinion, while excluding the general public to some greater or lesser degree from the decision process. This is the question of whether we view a democratic decision about AGI as the right thing to do, or just one possible means to our preferred end.
1.0) If we accept radically democratic principles and essentially want to put AGI up for a vote, then we have a lot of work to do: we have to reach out to the public, thoroughly inform them in detail about every known aspect of AGI, and convince a majority of the worldwide public that it is a good idea. If they reject it, we would have to postpone the development and/or release until public opinion sways or an un/friendly AGI gets released without consensus in the meantime.
1.1) Getting consent is not a trivial task by any stretch of my imagination, and from what I know about human psychology, I believe it is more rational to assume that the democratic approach cannot possibly work. If you think otherwise, if you SERIOUSLY think this can be successfully pulled off, then I think the burden of proof is on you: Why should 4.5 billion people suddenly become champions of rationality? How do you think this radical transformation from an insipid public to a powerhouse of intelligent decision-making will take place? None of you (those who defend the possibility and preference of the democratic approach) have done this yet. The only thing that could convince me here would be that the majority of people, or at least a sizable portion, have powerful brain augmentations by the time AGI is on the brink of completion. That I do not believe, but none of you have argued this case so far, nor did anyone argue in depth (including countering my arguments and concerns about) how a democratic approach could possibly succeed without brain augmentation.
2.0) If we reject the desirability of a democratic decision when it comes to AGI (as I do, for practical concerns), we automatically approach public opinion from a different angle: public opinion becomes an instrumental concern, because we admit to ourselves that we would be willing to release AGI whether or not we have public consent. If we go down this path, we must ask ourselves how we manage public opinion in a manner that benefits our cause. How exactly should we engage them, if at all? My "moral" take on this in a sentence: "I'm vastly more committed to rationality than I am to the idea that undiscriminating democracy is the gold standard of decision-making."
2.1) In this case, the question becomes whether or not informing the public as thoroughly as possible will aid or hinder our ambitions. If we believe the majority of the public would reject our AGI project even after we educate them about it (the scenario I predict), the question is obviously whether or not it is beneficial to inform them about it in the first place. I gave my reasons why I think secrecy (at least about some aspects of AGI) would be the better option, and I've not yet read any convincing thoughts to the contrary. How could we possibly trust them to make the rational choice once they're informed, and how could we (and they) react after most people are informed of AGI and actually disapprove?
2.2) If you're with me on 2.0 and 2.1, then the next problem is who we think should know about it to what extent, who shouldn't, and how this can be practically implemented. I've not thoroughly thought about this myself yet, because I hoped this would be the direction our discussion would go, but I'm disappointed that most of you seem to argue for 1.0 and 1.1 instead (which would be great if the arguments were good, but to me they seem like cheap applause lights rather than anything even remotely practical in the real world).
(These points are of course not a full breakdown of all possibilities to consider, but I believe they roughly cover most bases.)
I also expected to hear some of you make a good case for 1.0 and 1.1, or even call 0.0 into question, but most of you just pretend "1.0 and 1.1 are possible" without any sound explanation of why that would be the case. You just assume it can be done for some reason, but I think you should explain yourselves, because this is an extraordinary claim, while my assumption of 4.5 billion people NOT becoming rational superheroes or fanatical geeky AGI followers seems vastly more likely to me.
Considering what I've thought about until now, secrecy (or at the very least not too broad and enthusiastic public outreach, combined with an alternative approach of targeting more specific groups or people to contact) seems the preferable option to me. ALSO, I admit that public outreach is most probably fine right now, because people who reject it nowadays usually simply feel like it couldn't be done anyway, and it's so far off that they won't make an effort to oppose us, while the people whom we convince are all potential human resources for our cause, who are welcome and needed.
So in a nutshell, I think the cost/benefit ratio of public outreach is fine for now, but we ought to reconsider our approach in due time (perhaps a decade or so from now, depending on the future progress and public perception of AI).