
Notes on the Safety in Artificial Intelligence conference

25 UmamiSalami 01 July 2016 12:36AM

These are my notes and observations after attending the Safety in Artificial Intelligence (SafArtInt) conference, which was co-hosted by the White House Office of Science and Technology Policy and Carnegie Mellon University on June 27 and 28. This isn't an organized summary of the content of the conference; rather, it's a selection of points which are relevant to the control problem. As a result, it suffers from selection bias: it may look as though superintelligence and control-problem-relevant issues were discussed frequently, when in reality they came up less often, and I simply didn't write much about the more mundane parts.

SafArtInt was the third in a planned series of four conferences. The purpose of the series was twofold: the OSTP wanted to get other parts of the government moving on AI issues, and it also wanted to inform public opinion.

The other three conferences are about near-term legal, social, and economic issues of AI. SafArtInt was about near-term safety and reliability in AI systems. It was effectively the brainchild of Dr. Ed Felten, the deputy U.S. chief technology officer at the White House, who came up with the idea for it last year. CMU is a top computer science university, and many of its own researchers attended, as well as some students. There were also researchers from other universities, some people from private sector AI including both Silicon Valley and government contracting, government researchers and policymakers from groups such as DARPA and NASA, a few people from the military/DoD, and a few control problem researchers. As far as I could tell, everyone except a few university researchers was from the U.S., although I did not meet many people. There were about 70-100 people watching the presentations at any given time, and I had conversations with about twelve of the people who were not affiliated with existential risk organizations, as well as, of course, all of those who were. The conference was split with a few presentations on the 27th and the majority on the 28th. Not everyone was there for both days.

Felten believes that neither "robot apocalypses" nor "mass unemployment" is likely. It soon became apparent that the majority of others present at the conference felt the same way with regard to superintelligence. The general intention among researchers and policymakers at the conference could be summarized as follows: we need to make sure that the AI systems we develop in the near future will not be responsible for any accidents, because if accidents do happen then they will spark public fears about AI, which would lead to a dearth of funding for AI research and an inability to realize the corresponding social and economic benefits. Of course, that doesn't change the fact that they strongly care about safety in its own right and have significant pragmatic needs for robust and reliable AI systems.

Most of the talks were about verification and reliability in modern-day AI systems. They were concerned with AI systems that would give poor results or be unreliable in the narrow domains where they will be applied in the near future. They mostly focused on "safety-critical" systems, where failure of an AI program would have serious negative consequences: automated vehicles were a common topic of interest, as was the use of AI in healthcare systems. One recurring theme was that we have to be more rigorous in demonstrating safety and do actual hazard analyses on AI systems; another was that we need the AI safety field to succeed where the cybersecurity field has failed. Another general belief was that long-term AI safety, such as concerns about the ability of humans to control AIs, was not a serious issue.

On average, the presentations were moderately technical. They were mostly focused on machine learning systems, although there was significant discussion of cybersecurity techniques.

The first talk was given by Eric Horvitz of Microsoft. He discussed some approaches for pushing AI safety in new directions. Instead of merely trying to reduce the errors spotted according to one model, we should look out for "unknown unknowns" by stacking models and looking at problems which appear on any of them, a theme which other researchers would echo in later presentations. He discussed optimization under uncertain parameters, sensitivity analysis to uncertain parameters, and 'wireheading' or short-circuiting of reinforcement learning systems (which he believes can be guarded against by using 'reflective analysis'). Finally, he brought up the concerns about superintelligence, which sparked amused reactions in the audience. He said that scientists should address concerns about superintelligence, which he aptly described as the 'elephant in the room', noting that it was the reason some people were at the conference. He said that scientists will have to engage with public concerns, while also noting that there were experts who were worried about superintelligence and that there would have to be engagement with those experts' concerns. He did not comment on whether he believed these concerns were reasonable.
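
To make the model-stacking idea concrete, here is a minimal sketch of one way it could work (Horvitz didn't present code, so this is my own illustration, assuming scikit-learn-style classifiers): train several structurally different models on the same task and flag inputs where they disagree, treating disagreement as a cheap proxy for problems that any single model would miss.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def fit_committee(X_train, y_train):
    """Train several structurally different models on the same task."""
    models = [
        LogisticRegression(max_iter=1000),
        RandomForestClassifier(n_estimators=100),
        GradientBoostingClassifier(),
    ]
    for m in models:
        m.fit(X_train, y_train)
    return models

def flag_disagreements(models, X):
    """Return indices of inputs on which the committee is not unanimous."""
    preds = np.stack([m.predict(X) for m in models])  # (n_models, n_samples)
    unanimous = (preds == preds[0]).all(axis=0)
    return np.flatnonzero(~unanimous)  # candidates for human review
```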

An issue which came up in the Q&A afterwards was that we need to deal with mis-structured utility functions in AI, because the specific tradeoffs and utilities which humans claim to value often lead to results which those same humans don't like. So we need to have structural uncertainty about our utility models. The difficulty of finding good objective functions for AIs would be discussed in many other presentations as well.
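
A toy sketch of what "structural uncertainty about utility models" might look like in practice; the candidate utility functions and weights below are entirely illustrative, not anything presented at the conference. Rather than optimizing one hand-written objective, the agent keeps a weighted set of plausible objectives and can prefer actions that score well under all of them:

```python
def expected_utility(action, candidates):
    """candidates: list of (weight, utility_fn) pairs, weights summing to 1."""
    return sum(w * u(action) for w, u in candidates)

def worst_case_utility(action, candidates):
    """A more conservative aggregator: guard against any one model being wrong."""
    return min(u(action) for _, u in candidates)

# Three hand-written guesses at what the 'true' objective might be.
candidates = [
    (0.5, lambda a: -a["travel_time"]),                          # commuter model
    (0.3, lambda a: -a["travel_time"] - 10 * a["near_misses"]),  # safety-weighted
    (0.2, lambda a: -a["fuel_used"]),                            # efficiency model
]

actions = [
    {"travel_time": 30, "near_misses": 0, "fuel_used": 5},
    {"travel_time": 22, "near_misses": 2, "fuel_used": 6},
]
best = max(actions, key=lambda a: expected_utility(a, candidates))
safest = max(actions, key=lambda a: worst_case_utility(a, candidates))
```

Note that `best` and `safest` pick different actions here: the two aggregators disagree precisely because the candidate utility models disagree, which is the kind of structural uncertainty the questioner was pointing at.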

The next talk was given by Andrew Moore of Carnegie Mellon University, who said that his talk represented the consensus of computer scientists at the school. He claimed that the stakes of AI safety are very high: AI has the capability to save many lives in the near future, but if there are any accidents involving AI then public fears could freeze AI research and development. He highlighted the public's irrational tendencies, wherein a single accident can cause people to ignore hundreds of invisible lives saved. He specifically mentioned a 12-24 month timeframe for these issues.

Moore said that verification of AI system safety will be difficult due to the combinatorial explosion of AI behaviors. He talked about meta-machine-learning as a solution to this, something which is being investigated under the direction of Lawrence Schuette at the Office of Naval Research. Moore also said that military AI systems require high verification standards and that development timelines for these systems are long. He talked about two different approaches to AI safety, stochastic testing and theorem proving - the process of doing the latter often leads to the discovery of unsafe edge cases.
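
As an illustration of the stochastic-testing half of that dichotomy, here is a sketch that randomly samples scenarios against a deliberately naive, hypothetical braking policy and turns up the unsafe edge cases that theorem proving would instead characterize analytically. All numbers and the controller itself are made up:

```python
import random

def brake_controller(speed_mps, gap_m):
    """Naive hypothetical policy: brake only when time-to-collision < 2 s."""
    return speed_mps > 0 and gap_m / max(speed_mps, 1e-9) < 2.0

def is_unsafe(speed_mps, gap_m, decel=6.0):
    """Unsafe: we don't brake even though stopping distance exceeds the gap."""
    stopping_distance = speed_mps ** 2 / (2 * decel)
    return not brake_controller(speed_mps, gap_m) and stopping_distance > gap_m

random.seed(0)
failures = [
    (speed, gap)
    for speed, gap in ((random.uniform(0, 40), random.uniform(0, 200))
                       for _ in range(100_000))
    if is_unsafe(speed, gap)
]
print(f"{len(failures)} unsafe edge cases found, e.g. {failures[:3]}")
```

Random sampling finds many such failures quickly, but, as Moore's point implies, it can never show their absence; that is what the theorem-proving approach is for.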

He also discussed AI ethics, giving an example 'trolley problem' where AI cars would have to choose whether to hit a deer in order to provide a slightly higher probability of survival for the human driver. He said that we would need hard-coded ("#define"-style) constants to tell vehicle AIs how many deer a human is worth. He also said that we would need to find compromises in death-pleasantry tradeoffs, for instance where the safety of self-driving cars depends on the speed and routes on which they are driven. He compared the issue to civil engineering, where engineers have to operate with an assumption about how much money they would spend to save a human life.
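
A sketch of the point about hard-coded value constants; every number below is purely illustrative, and the point is only that some such constants must exist somewhere in the system:

```python
COST_HUMAN_FATALITY = 1_000_000.0  # relative units; who chooses this number?
COST_DEER_COLLISION = 2_000.0
COST_VEHICLE_DAMAGE = 500.0

def expected_cost(p_human_fatality, p_deer_hit, p_damage):
    return (p_human_fatality * COST_HUMAN_FATALITY
            + p_deer_hit * COST_DEER_COLLISION
            + p_damage * COST_VEHICLE_DAMAGE)

# Swerving avoids the deer but slightly raises the driver's fatality risk.
stay = expected_cost(p_human_fatality=1e-5, p_deer_hit=0.95, p_damage=0.9)
swerve = expected_cost(p_human_fatality=5e-5, p_deer_hit=0.05, p_damage=0.3)
decision = "swerve" if swerve < stay else "stay in lane"
```

Raise COST_HUMAN_FATALITY from 10^6 to 10^8 and the same code chooses to stay in lane; the ethics live entirely in those constants, which is exactly why Moore thinks someone has to own those numbers.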

He concluded by saying that we need policymakers, company executives, scientists, and startups to all be involved in AI safety. He said that the research community stands to gain or lose together, and that there is a shared responsibility among researchers and developers to avoid triggering another AI winter through unsafe AI designs.

The next presentation was by Richard Mallah of the Future of Life Institute, who was there to represent "Medium Term AI Safety". He pointed out the explicit/implicit distinction between different modeling techniques in AI systems, as well as the explicit/implicit distinction between different AI actuation techniques. He talked about the difficulty of value specification and the concept of instrumental subgoals as an important issue in the case of complex AIs which are beyond human understanding. He said that even a slight misalignment of AI values with regard to human values along one parameter could lead to a strongly negative outcome, because machine learning parameters don't strictly correspond to the things that humans care about.

Mallah stated that open-world discovery leads to self-discovery, which can lead to reward hacking or a loss of control. He underscored the importance of causal accounting, which is distinguishing causation from correlation in AI systems. He said that we should extend machine learning verification to self-modification. Finally, he talked about introducing non-self-centered ontology to AI systems and bounding their behavior.

The audience was generally quiet and respectful during Richard's talk. I sensed that at least a few of them labelled him as part of the 'superintelligence out-group' and dismissed him accordingly, but I did not learn what most people's thoughts or reactions were. In the next panel, featuring three speakers, he received no questions regarding his presentation or ideas.

Tom Mitchell from CMU gave the next talk. He talked about both making AI systems safer, and using AI to make other systems safer. He said that risks to humanity from other kinds of issues besides AI were the "big deals of 2016" and that we should make sure that the potential of AIs to solve these problems is realized. He wanted to focus on the detection and remediation of all failures in AI systems. He said that it is a novel issue that learning systems defy standard pre-testing ("as Richard mentioned") and also brought up the purposeful use of AI for dangerous things.

Some interesting points were raised in the panel. Andrew did not have a direct response to the implications of AI ethics being determined by the predominantly white people of the US/UK where most AIs are being developed. He said that ethics in AIs will have to be decided by society, regulators, manufacturers, and human rights organizations in conjunction. He also said that our cost functions for AIs will have to get more and more complicated as AIs get better, and he said that he wants to separate unintended failures from superintelligence type scenarios. On trolley problems in self driving cars and similar issues, he said "it's got to be complicated and messy."

Dario Amodei of Google Brain, who co-authored the paper on concrete problems in AI safety, gave the next talk. He said that the public focus is too much on AGI/ASI and that he wants more focus on concrete/empirical approaches. He discussed the same problems that pose issues in advanced general AI, including flawed objective functions and reward hacking. He said that he sees long-term concerns about AGI/ASI as "extreme versions of accident risk" and that he thinks it's too early to work directly on them, but he believes that if you want to deal with them then the best way to do it is to start with safety in current systems. Mostly he summarized the Google paper in his talk.

In her presentation, Claire Le Goues of CMU said "before we talk about Skynet we should focus on problems that we already have." She mostly talked about analogies between software bugs and AI safety, the similarities and differences between the two and what we can learn from software debugging to help with AI safety.

Robert Rahmer of IARPA discussed CAUSE, a cyberintelligence forecasting program which promises to help predict cyber attacks; the program is still being put together.

In the panel of the above three, autonomous weapons were discussed, but no clear policy stances were presented.

John Launchbury gave a talk on DARPA research and the big picture of AI development. He pointed out that DARPA work leads to commercial applications and that progress in AI comes from sustained government investment. He classified AI capabilities into "describing," "predicting," and "explaining," in order of increasing difficulty, and he pointed out that old-fashioned "describing" still plays a large role in AI verification. He said that "explaining" AIs would need transparent decision-making and probabilistic programming (the latter would also be discussed by others at the conference).

The next talk came from Jason Gaverick Matheny, the director of IARPA. Matheny talked about four requirements in current and future AI systems: verification, validation, security, and control. He wanted "auditability" in AI systems as a weaker form of explainability. He talked about the importance of "corner cases" for national intelligence purposes: the low-probability, high-stakes situations where we have limited data. These are situations where we have significant need for analysis but where the traditional machine learning approach doesn't work, because of its overwhelming focus on data. Another aspect of national defense is that it has a slower decision tempo, longer timelines, and a longer view of future events.

He said that assessing local progress in machine learning development would be important for global security and that we therefore need benchmarks to measure progress in AIs. He ended with a concrete invitation for research proposals from anyone (educated or not), for both large scale research and for smaller studies ("seedlings") that could take us "from disbelief to doubt".

The difference in timescales between different groups was something I noticed later on, after hearing someone from the DoD describe their agency as having a longer timeframe than the Department of Homeland Security, and someone from the White House describe their work as being reactive to crises.

The next presentation was from Andrew Grotto, senior director of cybersecurity policy at the National Security Council. He drew a close parallel between the issue of genetically modified crops in Europe in the 1990s and modern-day artificial intelligence. He pointed out that Europe utterly failed to achieve widespread cultivation of GMO crops as a result of public backlash. He said that the widespread economic and health benefits of GMO crops were ignored by the public, who instead focused on a few health incidents which undermined trust in the government and crop producers. He had three key points: that risk frameworks matter, that you should never assume that the benefits of new technology will be widely perceived by the public, and that we're all in this together with regard to funding, research progress, and public perception.

In the Q&A between Launchbury, Matheny, and Grotto after Grotto's presentation, it was mentioned that the economic interests of farmers worried about displacement also played a role in populist rejection of GMOs, and that a similar dynamic could play out with regard to automation causing structural unemployment. Grotto was also asked what to do about bad publicity which seeks to sink progress in order to avoid risks. He said that meetings like SafArtInt and open public dialogue were good.

One person asked what Launchbury wanted to do about AI arms races with multiple countries trying to "get there" and whether he thinks we should go "slow and secure" or "fast and risky" in AI development, a question which provoked laughter in the audience. He said we should go "fast and secure" and wasn't concerned. He said that secure designs for the Internet once existed, but the one which took off was the one which was open and flexible.

Another person asked how we could avoid discounting outliers in our models, referencing Matheny's point that we need to include corner cases. Matheny affirmed that data quality is a limiting factor for many of our machine learning capabilities, and said that IARPA generally tries to include outliers until it is sure that they are erroneous.

Another presentation came from Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence. He said that we have not focused enough on safety, reliability and robustness in AI and that this must change. Much like Eric Horvitz, he drew a distinction between robustness against errors within the scope of a model and robustness against unmodeled phenomena. On the latter issue, he talked about solutions such as expanding the scope of models, employing multiple parallel models, and doing creative searches for flaws - the latter doesn't enable verification that a system is safe, but it nevertheless helps discover many potential problems. He talked about knowledge-level redundancy as a method of avoiding misspecification - for instance, systems could identify objects by an "ownership facet" as well as by a "goal facet" to produce a combined concept with less likelihood of overlooking key features. He said that this would require wider experiences and more data.
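
A minimal sketch of the knowledge-level-redundancy idea as I understood it; the facet models and labels are hypothetical stand-ins. The object is identified through two semi-independent facets, and the system only acts on the combined concept when they agree:

```python
def classify_with_redundancy(x, ownership_model, goal_model):
    """Accept a label only when two semi-independent 'facets' agree."""
    owner_view = ownership_model(x)
    goal_view = goal_model(x)
    if owner_view == goal_view:
        return owner_view
    return "UNCERTAIN"  # disagreement: defer to a human or gather more data

# Illustrative stand-ins: each facet looks at different attributes of x.
ownership_model = lambda x: "medical-supply" if x["owner"] == "hospital" else "other"
goal_model = lambda x: "medical-supply" if x["use"] == "treatment" else "other"
label = classify_with_redundancy(
    {"owner": "hospital", "use": "treatment"}, ownership_model, goal_model)
```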

There were many other speakers who brought up a similar set of issues: the use of cybersecurity techniques to verify machine learning systems, the failures of cybersecurity as a field, opportunities for probabilistic programming, and the need for better success in AI verification. Inverse reinforcement learning was extensively discussed as a way of assigning values. Jeannette Wing of Microsoft talked about the need for AIs to reason about the continuous and the discrete in parallel, as well as the need for them to reason about uncertainty (with potential meta levels all the way up). One point made by Sarah Loos of Google was that proving the safety of an AI system can be computationally very expensive, especially given the combinatorial explosion of AI behaviors.
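
Since inverse reinforcement learning came up repeatedly, here is a bare-bones sketch of one classical formulation (max-margin over feature expectations, in the spirit of Abbeel and Ng's 2004 apprenticeship-learning work, not anything specific presented at the conference); the feature vectors are made up:

```python
import numpy as np
from scipy.optimize import linprog

mu_expert = np.array([0.9, 0.1, 0.0])  # expert's observed feature expectations
mu_alternatives = np.array([           # feature expectations of other policies
    [0.5, 0.5, 0.0],
    [0.2, 0.1, 0.7],
])

d = mu_expert.size
# Variables [w_1..w_d, t]: maximize margin t subject to
# w . (mu_expert - mu_alt_i) >= t for every alternative, with |w_j| <= 1.
c = np.zeros(d + 1)
c[-1] = -1.0  # linprog minimizes, so minimize -t
A_ub = np.hstack([-(mu_expert - mu_alternatives),
                  np.ones((len(mu_alternatives), 1))])
b_ub = np.zeros(len(mu_alternatives))
bounds = [(-1, 1)] * d + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
weights, margin = res.x[:d], res.x[-1]  # reward = weights . features
```

The recovered `weights` define a reward function under which the demonstrated behavior beats every alternative by the largest possible margin, which is the sense in which IRL "assigns values" from behavior.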

In one of the panels, the idea of government actions to ensure AI safety was discussed. No one was willing to say that the government should regulate AI designs. Instead they stated that the government should be involved in softer ways, such as guiding and working with AI developers, and setting standards for certification.

Pictures: https://imgur.com/a/49eb7

In between these presentations I had time to speak to individuals and listen in on various conversations. A high ranking person from the Department of Defense stated that the real benefit of autonomous systems would be in terms of logistical systems rather than weaponized applications. A government AI contractor drew the connection between Mallah's presentation and the recent press revolving around superintelligence, and said he was glad that the government wasn't worried about it.

I talked to some insiders about the status of organizations such as MIRI, and found that the current crop of AI safety groups could use additional donations to become more established and expand their programs. There may be some issues with the organizations being sidelined; after all, the Google Brain paper was essentially similar to a lot of work by MIRI, just expressed in somewhat different language, and it was more widely received in mainstream AI circles.

In terms of careers, I found that there is significant opportunity for a wide range of people to contribute to improving government policy on this issue. Working at a group such as the Office of Science and Technology Policy does not necessarily require advanced technical education, as you can just as easily enter straight out of a liberal arts undergraduate program and build a successful career as long as you are technically literate. (At the same time, the level of skepticism about long term AI safety at the conference hinted to me that the signalling value of a PhD in computer science would be significant.) In addition, there are large government budgets in the seven or eight figure range available for qualifying research projects. I've come to believe that it would not be difficult to find or create AI research programs that are relevant to long term AI safety while also being practical and likely to be funded by skeptical policymakers and officials.

I also realized that there is a significant need for people who are interested in long term AI safety to have basic social and business skills. Since there is so much need for persuasion and compromise in government policy, there is a lot of value to be had in being communicative, engaging, approachable, appealing, socially savvy, and well-dressed. This is not to say that everyone involved in long term AI safety is missing those skills, of course.

I was surprised by the refusal of almost everyone at the conference to take long-term AI safety seriously, as I had previously believed it was more of a mixed debate, given the existence of expert computer scientists who are involved in the issue. I sensed that the recent wave of popular press and public interest in dangerous AI has made researchers and policymakers substantially less likely to take the issue seriously. None of them seemed to be familiar with actual arguments or research on the control problem, so their opinions didn't significantly change my outlook on the technical issues. I strongly suspect that the majority of them had their first, or possibly only, exposure to the idea of the control problem through badly written op-eds and news editorials featuring comments from the likes of Elon Musk and Stephen Hawking, which would naturally predispose them not to take the issue seriously. In the run-up to the conference, websites and press releases didn't say anything about whether it would be about long- or short-term AI safety, and they made no reference to the idea of superintelligence.

I sympathize with the concerns and strategy given by people such as Andrew Moore and Andrew Grotto, which make perfect sense if (and only if) you assume that worries about long-term AI safety are completely unfounded. For the community that is interested in long-term AI safety, I would recommend that we avoid competitive dynamics by (a) demonstrating that we are equally strong opponents of bad press, inaccurate news, and irrational public opinion which promotes generic uninformed fears about AI; (b) explaining that we are not interested in removing funding for AI research (even if you think that slowing down AI development is a good thing, restricting funding yields only limited benefits in terms of changing overall timelines, whereas those who are not concerned about long-term AI safety would see a restriction of funding as a direct threat to their interests and projects, so it makes sense to cooperate here in exchange for other concessions); and (c) showing that we are scientifically literate and focused on the technical concerns. I do not believe the two "sides" here need to be competing against each other, so it was disappointing to see an implication of opposition at the conference.

Anyway, Ed Felten announced a request for information from the general public, seeking popular and scientific input on the government's policies and attitudes towards AI: https://www.whitehouse.gov/webform/rfi-preparing-future-artificial-intelligence

Overall, I learned quite a bit and benefited from the experience, and I hope the insight I've gained can be used to improve the attitudes and approaches of the long term AI safety community.

[Link] White House announces a series of workshops on AI, expresses interest in safety

11 AspiringRationalist 04 May 2016 02:50AM

Easy wins aren't news

39 PhilGoetz 19 February 2015 07:38PM

Recently I talked with a guy from Grant Street Group. They make, among other things, software with which local governments can auction their bonds on the Internet.

By making the auction process more transparent and easier to participate in, they enable local governments which need to sell bonds (to build a high school, for instance) to sell those bonds at, say, 7% interest instead of 8%. (At least, that's what he said.)
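
The rough arithmetic behind that claim, with a hypothetical bond issue (a simple level-interest approximation that ignores amortization and discounting):

```python
principal = 10_000_000  # hypothetical $10M bond issue for a high school
years = 20

def total_interest(rate):
    # Simple level-interest approximation: ignores amortization and discounting.
    return principal * rate * years

savings = total_interest(0.08) - total_interest(0.07)
print(f"Savings at 7% vs 8%: ${savings:,.0f}")  # -> $2,000,000 over 20 years
```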

They have similar software for auctioning liens on property taxes, which also helps local governments raise more money by bringing more buyers to each auction, and probably helps the buyers reduce their risks by giving them more information.

This is a big deal. I think it's potentially more important than any budget argument that's been on the front pages since the 1960s. Yet I only heard of it by chance.

People would rather argue about reducing the budget by eliminating waste, or cutting subsidies to people who don't deserve it, or changing our ideological priorities. Nobody wants to talk about auction mechanics. But fixing the auction mechanics is the easy win. It's so easy that nobody's interested in it. It doesn't buy us fuzzies or let us signal our affiliations. To an individual activist, it's hardly worth doing.

Sortition - Hacking Government To Avoid Cognitive Biases And Corruption

0 Aussiekas 06 May 2014 06:10AM

I've elaborated on the form of government I propose in great detail on my blog here.

The purpose of this post is to make a persuasive argument for my proposed system of democracy.  I argue that my legislature by sortition (random selection) is superior to electoral systems.  It also mirrors the advances in overcoming bias which are currently being pioneered in the sciences.

I. The Problem

It is insane that we allow the same elected officials to cast their eye on society to identify problems, write up the solutions to those problems, and then also vote to approve those solutions.  This triple function of government by elected officials isn't simply corruptible; it is inherently flawed as a decision-making process.

II. The Central Committee, overcoming bias, electoral shenanigans, and demographics bias

In my system of sortition, legislation is decided by a mini-referendum of a huge sample of 1,000-5,000 representatives at the highest level.  They vote everything up or down and cannot change anything about a bill themselves.  They are not congregated in one place and there are no politics between them; they don't even need to know each other, nor could they.  Perhaps they could be part of political parties, but there is no need or money behind this, as the members of what I'm calling the Central Committee (C2) are never candidates and can individually never serve more than once per lifetime (or perhaps once per decade) in 3-year terms.

Contentious issues can be moved to a general referendum.  In the 1,000-member C2, any vote that falls within a 550-450 margin can be subjected to a special second vote proposed by the disagreeing side; if more than 600 members agree, the item is added to a general monthly or quarterly referendum conducted electronically with the entire population.  In this way the average person participates in and feels heard by their government on a regular basis.
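
The escalation rule as described is simple enough to state as code; a minimal sketch, with the thresholds taken from the paragraph above:

```python
C2_SIZE = 1000
MARGIN_LOW, MARGIN_HIGH = 450, 550   # the "contentious" band of yes-votes
CHALLENGE_THRESHOLD = 600            # second-vote support needed to escalate

def is_contentious(yes_votes):
    return MARGIN_LOW <= yes_votes <= MARGIN_HIGH

def escalates_to_referendum(yes_votes, challenge_votes):
    """The disagreeing side may call a second vote; >600 sends it to referendum."""
    return is_contentious(yes_votes) and challenge_votes > CHALLENGE_THRESHOLD

# A 520-480 split is contentious; 610 members backing the challenge
# puts the bill on the next monthly or quarterly electronic referendum.
assert escalates_to_referendum(yes_votes=520, challenge_votes=610)
```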

The major advantage of this C2 is that it is representative. It will have people from all areas, will be 50% male and 50% female, and will include all minorities.  There can be no great misrepresentation or capture of the legislature by a powerful group.  This overcomes many of the inherent biases of electoral systems, which in almost every democracy today routinely underrepresent minorities.

III. The Issue Committees (IC)

The IC is a totally separate body whose sole job is to identify areas of the law which need updating.  Each is composed of 100 citizens, split between 51 Regular Citizens (RCs) and 49 Expert Citizens (ECs) serving single 3-year terms.  There are around 30 ICs, and they each serve an area such as defence, environment, food safety, drug safety, telecommunications, changes to government, the finance sector, the banking sector, etc.

These committees will meet in person and discuss what needs exist which the government can address.  They do not get to write any laws, nor do they get to vote on any laws.  There are in fact more IC members than there are members of the C2, and they will be the primary face of government, where the average citizen can send in requests or communicate needs.  The IC shines a spotlight on the issues facing the country.  They also form the law-writing bodies.

IV. The Sub Committee (SC)

These are temporary parts of the legislature which write the laws.  They have no authority over what topic area they write laws about; that is determined by the IC and then voted upon by the C2.  They are composed of 10 RCs and 10 ECs, with the support of 10 Lawyer Citizens (LCs). The LCs do not vote on whether the draft law moves up to the C2 for consideration; they simply help draft reasonable laws.

These SCs form and dissolve quickly, lasting no more than 3-6 months before a proposed law is produced.  Being called up to an SC is a lot more akin to being drafted for jury duty than service at the IC or C2 level of government, as it is a short term of service.

V. Conclusions

  • This system is indeed more democratic and more representative than current electoral democracies.  It is less prone to corruption, and electioneering is impossible as there are no elections.


  • The C2, IC, and SC are intentionally split in their duties so that no conflict of interest can arise and there is no legislator bias, where legislators have pet bills and issues to push through for the benefit of specific parts of the country.

  • This system is also less influenced by the views and opinions of the very wealthy, and by the demographic and economic makeup of the people involved.

And that's it.  Could it work?  Would it work?  I'd like to think it has some advantages over the current, outdated mechanisms of democracy, given new knowledge about how the human mind works.

EDIT:  moved notes to bottom of post

NOTE 1:  I anticipate this objection.  Regular Citizens (RCs) and Expert Citizens (ECs) have various stipulations on their service and on how often they can serve; check out my linked post at the top for details.  Suffice to say, the RCs must have completed high school and cannot be intellectually disabled.  Whatever you can think of that might disqualify someone from a jury, think of something along those lines.

NOTE 2: As for the nature of this being different, look at juries.  We already use a process of sortition, though heavily and perhaps unfairly constrained in its current form, to determine whether people are guilty or innocent and what sort of punishment they might receive.  We even use sortition in committees of experts in various forms, from peer-reviewed journals with somewhat random selection from a pool of qualified individuals, to the ECs in my system.

NOTE 3:  This is not about politics.  I often say I am interested in government, but not politics.  This confuses a lot of people.  If anything, this system would lessen or (too optimistically) eliminate politics.  I know there is a general ban on discussion of politics here, and this is not that.  I am trying to modify government and democratic systems to reflect advances in the study of cognitive bias, decision theory, and computer technology, to modernize and further democratize the practice of government.

What did governments get right? Gotta list them all!

6 Stuart_Armstrong 18 September 2013 12:59PM

When predicting future threats, we also need to predict future policy responses. If mass pandemics are inevitable, it matters whether governments and international organisations can rise to the challenge or not. But it's very hard to get a valid intuitive picture of government competence. Consider the following two scenarios:

  • Governments are morasses of incompetence, saturated by turf wars, perverse incentives, inefficiencies, regulatory capture, and excessive risk aversion. The media reports a lot of the bad stuff, but doesn't have nearly enough space for it all, as it has to find some room for sport and naked celebrities. The average person will hear 1 story of government incompetence a day, anyone following the news will hear 10, a dedicated obsessive will hear 100 - but this is just the tip of the iceberg. The media sometimes reports good news to counterbalance the bad, at about a rate of 1-to-10 of good news to bad. This rate is wildly over-optimistic.
  • Governments are filled mainly by politicians desperate to make a positive mark on the world. Civil servants are professional and certainly not stupid, working to clear criteria with a good internal culture, in systems that have learnt the lessons of the past and have improved. There is a certain amount of error, inefficiency, and corruption, but these are more exceptions than rules. Highly politicised issues tend to be badly handled, but less contentious issues are dealt with well. The media, knowing that bad news sells, fills their pages mainly with bad stuff (though they often have to exaggerate issues). The average person will hear 1 story of government incompetence a day, anyone following the news will hear 10, a dedicated obsessive will hear 100 - but some of those are quite distorted. The media sometimes reports good news to counterbalance the bad, at about a rate of 1-to-10 of good news to bad. This rate is wildly over-pessimistic.

These two situations are, of course, completely indistinguishable to the public. The smartest and most dedicated of outside observers can't form an accurate picture of the situation. Which means that, unless you have spent your entire life inside various levels of government (which brings its own distortions!), you don't really have a clue about general government competence. There are some very faint clues that governments may be working better than we generally think: looking at the achievements of past governments certainly seems to hint at a higher rate of success than the reported numbers today. And simply thinking about the number of things that don't go wrong in a city, every day, hints that someone is doing their job. But these clues are extremely weak.

At this point, one should look up political scientists and other researchers. I hope to be doing that at some point (or the FHI may hire someone to do it). In the meantime, I just wanted to collect a few stories of government success to counterbalance the general media atmosphere. The purpose is not just to train my intuition away from the "governments are intrinsically incompetent" stance that I currently have (and which is unjustified by objective evidence). It's also the start of a project to get a better picture of where governments fail and where they succeed - which would be much more accurate and much more useful than an abstract "government competence level" intuition, and which would be needed if we try to predict policy responses to specific future threats.

So I'm asking whether commentators want to share government success stories they may have come across - especially unusual or unsuspected stories. Vaccinations, clean-air acts, and legally establishing limited liability companies are very well known success stories, for instance, but are there more obscure examples that hint at unexpected diligence in surprising areas?

Politics Discussion Thread February 2013

1 OrphanWilde 06 February 2013 09:33PM


  1. Top-level comments should introduce arguments; responses should be responses to those arguments. 
  2. Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised.  This means judging whether it's a good argument against the argument it is responding to, not whether there's a good/obvious counterargument to it; if you have a good counterargument, raise it.  If it's a convincing argument, and the counterargument is also convincing, upvote both.  If both arguments are unconvincing, downvote both. 
  3. A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate.
  4. In general try to avoid color politics; try to discuss political issues, rather than political parties, wherever possible.

As Multiheaded added, "Personal is Political" stuff like gender relations, etc also may belong here.


[Link] The Worst-Run Big City in the U.S.

28 [deleted] 02 December 2012 12:50PM

The Worst-Run Big City in the U.S.

A six-page article that reads as a very interesting autopsy of what institutional dysfunction at the intersection of government and non-profits looks like. I recommend reading the whole thing.

Minus the alleged harassment, city government is filled with Yomi Agunbiades — and they're hardly ever disciplined, let alone fired. When asked, former Board of Supervisors President Aaron Peskin couldn't remember the last time a higher-up in city government was removed for incompetence. "There must have been somebody," he said at last, vainly searching for a name.

Accordingly, millions of taxpayer dollars are wasted on good ideas that fail for stupid reasons, and stupid ideas that fail for good reasons, and hardly anyone is taken to task.

The intrusion of politics into government pushes the city to enter long-term labor contracts it obviously can't afford, and no one is held accountable. A belief that good intentions matter more than results leads to inordinate amounts of government responsibility being shunted to nonprofits whose only documented achievement is to lobby the city for money. Meanwhile, piles of reports on how to remedy these problems go unread. There's no outrage, and nobody is disciplined, so things don't get fixed.

You don't say?

In 2007, the Department of Children, Youth, and Families (DCYF) held a seminar for the nonprofits vying for a piece of $78 million in funding. Grant seekers were told that in the next funding cycle, they would be required — for the first time — to provide quantifiable proof their programs were accomplishing something.

The room exploded with outrage. This wasn't fair. "What if we can bring in a family we've helped?" one nonprofit asked. Another offered: "We can tell you stories about the good work we do!" Not every organization is capable of demonstrating results, a nonprofit CEO complained. He suggested the city's funding process should actually penalize nonprofits able to measure results, so as to put everyone on an even footing. Heads nodded: This was a popular idea.

Reading this I had to bite my hand in frustration.

There are two lessons here. First, many San Francisco nonprofits believe they're entitled to money without having to prove that their programs work. Second, until 2007, the city agreed. Actually, most of the city still agrees. DCYF is the only city department that even attempts to track results. It's the model other departments are told to aspire to.

But Maria Su, DCYF's director, admitted that accountability is something her department still struggles with. It can track "output" — what a nonprofit does, how often, and with how many people — but it can't track "outcomes." It can't demonstrate that these outputs — the very things it pays nonprofits to do — are actually helping anyone.

"Believe me, there is still hostility to the idea that outcomes should be tracked," Su says. "I think we absolutely need to be able to provide that level of information. But it's still a work in progress." In the meantime, the city is spending about $500 million a year on programs that might or might not work.

What the efficient charity movement has done so far looks much more impressive in light of this. Reading the rest of the article, I think you can identify on your own the problems caused by lost purposes, applause lights, and a dozen or so other faults we've explored here for years.

Discussions here are in many respects a comforting illusion. This is what humanity is like out there in the real world, almost at its best: well educated, wealthy, and interested in the public good.

Yes it really is that bad.

Book reviews

3 PhilGoetz 14 April 2011 01:50PM

I'd like to see book reviews of books of interest to LW.  Some suggestions:

  • Dan Ariely (2010).  The Upside of Irrationality: The unexpected benefits of defying logic at work and at home.
  • Sam Harris (2010).  The Moral Landscape: How science can determine human values.
  • Dan Ariely (2009).  Predictably Irrational: The Hidden Forces That Shape Our Decisions.
  • Timothy Ferris (2010).  The Science of Liberty: Democracy, Reason, and the Laws of Nature.
  • Joel Garreau (2005).  Radical Evolution.  Book about genetic mods, intelligence enhancement, and the singularity.

ADDED:  I don't mean I'd like to see reviews in this thread.  I'd like each review to have its own thread.  In discussion or on the "new" page is up to you.

Public international law

-13 Kevin 10 November 2010 10:10AM