Stanford Professor Sam Savage (also of Probability Management) proposes that large firms appoint a "Chief Probability Officer." Here is a description from Douglas Hubbard's How to Measure Anything, ch. 6:
Sam Savage... has some ideas about how to institutionalize the entire process of creating Monte Carlo simulations [for estimating risk].
...His idea is to appoint a chief probability officer (CPO) for the firm. The CPO would be in charge of managing a common library of probability distributions for use by anyone running Monte Carlo simulations. Savage invokes concepts like the Stochastic Information Packet (SIP), a pregenerated set of 100,000 random numbers for a particular value. Sometimes different SIPs would be related. For example, the company’s revenue might be related to national economic growth. A set of SIPs that are generated so they have these correlations are called “SLURPS” (Stochastic Library Units with Relationships Preserved). The CPO would manage SIPs and SLURPs so that users of probability distributions don’t have to reinvent the wheel every time they need to simulate inflation or healthcare costs.
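As a rough sketch of the SIP/SLURP idea (not Savage's actual implementation), two related quantities can be sampled jointly so that the stored trials preserve their correlation. The means, covariances, and variable names below are purely illustrative:

```python
import numpy as np

N_TRIALS = 100_000  # each SIP is a pregenerated set of trials

# Hypothetical relationship between national economic growth and company
# revenue. A SLURP generates the related SIPs jointly, so the correlation
# survives into the stored trials.
rng = np.random.default_rng(42)
mean = [0.02, 1.0e6]          # 2% growth, $1M revenue (illustrative)
cov = [[1.0e-4, 60.0],        # off-diagonal chosen for ~0.6 correlation
       [60.0,   1.0e8]]
samples = rng.multivariate_normal(mean, cov, size=N_TRIALS)

slurp = {
    "economic_growth": samples[:, 0],  # one SIP
    "revenue": samples[:, 1],          # a related SIP
}

# Any model in the firm can now reuse these trials instead of
# re-simulating inflation, growth, etc. from scratch.
corr = np.corrcoef(slurp["economic_growth"], slurp["revenue"])[0, 1]
```

The point of pregenerating is that every analyst who pulls the `revenue` SIP gets trials already consistent with the `economic_growth` SIP.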
Hubbard adds some of his own ideas to the proposal:
- Certification of analysts. Right now, there is not a lot of quality control for decision analysis experts. Only actuaries, in their particular specialty of decision analysis, have extensive certification requirements. As with actuaries, certification in decision analysis should eventually be an independent not-for-profit program run by a professional association. Some other professional certifications now partly cover these topics but fall far short in substance in this particular area. For this reason, I began certifying individuals in Applied Information Economics because there was an immediate need for people to be able to prove their skills to potential employers.
- Certification for calibrated estimators. As we discussed earlier, an uncalibrated estimator has a strong tendency to be overconfident, so any calculation of risk based on his or her estimates will likely be significantly understated. However, a survey I once conducted showed that calibration is almost unheard of among those who build Monte Carlo models professionally, even though a majority used at least some subjective estimates. (About a third of those surveyed used mostly subjective estimates.) Calibration training will be one of the simplest improvements to risk analysis in an organization.
- Well-documented procedures and templates for how models are built from the input of various calibrated estimators. It takes some time to smooth out the wrinkles in the process, but most organizations don’t need to start from scratch for every new investment they are analyzing; they can base their work on that of others or at least reuse their own prior models. I’ve executed nearly the same analysis procedure, following similar project plans, for a wide variety of decision analysis problems, from IT security and military logistics to entertainment industry investments. And when I applied the same method to different problems in the same organization, I often found that certain parts of the model would be similar to parts of earlier models. An insurance company will have several investments that include estimating the impact on “customer retention” and “claims payout ratio.” Manufacturing-related investments will have calculations related to “marginal labor costs per unit” or “average order fulfillment time.” These issues don’t have to be modeled anew for each new investment problem. They are reusable modules in spreadsheets.
- Adoption of a single automated tool set. [In this book I show] a few of the many tool sets available. You can get as sophisticated as you like, but starting out doesn’t require any more than some good spreadsheet-based tools. I recommend starting simple and adopting more extensive tool sets as the situations demand.
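Hubbard's calibration point lends itself to a quick self-test. Here is a sketch of how such a test might be scored; the questions, bounds, and answers below are made up for illustration:

```python
# Score a calibration test: an estimator gives 90% confidence intervals
# for quantities with known true values. A calibrated estimator should
# capture the truth about 90% of the time over many questions.

# (question, lower bound, upper bound, true value) -- illustrative data
answers = [
    ("Boiling point of water at sea level, deg C", 90, 110, 100),
    ("Year the first iPhone was released", 2000, 2004, 2007),
    ("Length of the Nile, km", 5000, 8000, 6650),
    ("Number of keys on a standard piano", 70, 90, 88),
    ("Year of the first Moon landing", 1965, 1975, 1969),
]

hits = sum(lo <= true <= hi for _, lo, hi, true in answers)
hit_rate = hits / len(answers)

# A hit rate far below 0.9 over many questions suggests overconfidence:
# the intervals are too narrow.
print(f"{hits}/{len(answers)} intervals contained the truth ({hit_rate:.0%})")
```

With real calibration training the exercise is repeated over dozens of questions until the estimator's hit rate approaches the stated confidence level.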
The Music Genome Project is what powers Pandora. According to Wikipedia:
The Music Genome Project was first conceived by Will Glaser and Tim Westergren in late 1999. In January 2000, they joined forces with Jon Kraft to found Pandora Media to bring their idea to market. The Music Genome Project was an effort to "capture the essence of music at the fundamental level" using almost 400 attributes to describe songs and a complex mathematical algorithm to organize them. Under the direction of Nolan Gasser, the musical structure and implementation of the Music Genome Project, made up of 5 Genomes (Pop/Rock, Hip-Hop/Electronica, Jazz, World Music, and Classical), was advanced and codified.
A given song is represented by a vector (a list of attributes) containing approximately 400 "genes" (analogous to trait-determining genes for organisms in the field of genetics). Each gene corresponds to a characteristic of the music, for example, gender of lead vocalist, level of distortion on the electric guitar, type of background vocals, etc. Rock and pop songs have 150 genes, rap songs have 350, and jazz songs have approximately 400. Other genres of music, such as world and classical music, have 300–500 genes. The system depends on a sufficient number of genes to render useful results. Each gene is assigned a number between 1 and 5, in half-integer increments.
Given the vector of one or more songs, a list of other similar songs is constructed using a distance function. Each song is analyzed by a musician in a process that takes 20 to 30 minutes per song. Ten percent of songs are analyzed by more than one technician to ensure conformity with the in-house standards and statistical reliability. The technology is currently used by Pandora to play music for Internet users based on their preferences. Because of licensing restrictions, Pandora is available only to users whose location is reported to be in the USA by Pandora's geolocation software.
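The quoted description doesn't specify Pandora's actual distance function, but a simple Euclidean distance over shared gene values illustrates the idea; the gene names and values below are invented:

```python
import math

# Illustrative 5-gene slices of the roughly 400-gene vectors described
# above; each gene takes a value from 1 to 5 in half-integer steps.
song_a = {"vocal_grit": 3.5, "distortion": 4.0, "tempo_feel": 2.5,
          "syncopation": 3.0, "minor_tonality": 1.5}
song_b = {"vocal_grit": 3.0, "distortion": 4.5, "tempo_feel": 2.5,
          "syncopation": 2.0, "minor_tonality": 1.5}

def distance(a, b):
    """Euclidean distance over the genes the two songs share."""
    shared = a.keys() & b.keys()
    return math.sqrt(sum((a[g] - b[g]) ** 2 for g in shared))

d = distance(song_a, song_b)  # smaller distance = more similar songs
```

Given a seed song, a station would then recommend the songs with the smallest distances to it.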
So I was thinking, you could probably do something like that for writing, and then try to craft a written work with elements known to appeal to people. For instance, if you wished to write a best-selling detective novel, you might analyze when the antagonist(s) first appear in the plot. You might find that 15% of bestsellers open with the primary antagonist committing their crime, 10% mix the antagonist into the plot quickly, and 75% keep the primary antagonist a vague and shadowy figure until shortly before the climax.
I don’t know if real bestsellers fit that pattern (I don’t read many detective novels), and it would be a bit of a surprise if they matched those made-up numbers. But given such data, you might think: well, hey, I had better either introduce the antagonist right away, having them commit their crime, or keep him shadowy for a while.
Or, to use an easier example: perhaps you could adopt engineering checklists wholesale into your chosen discipline? It seems to me that lots of fields which don’t use checklists could benefit tremendously from them. I run this through my mind again and again: what kind of checklist could be built here? I first came across the concept when surgery adopted checklists from engineering, after which surgical accidents and mistakes dropped dramatically.
Some people at TV Tropes came across that article and thought that their wiki's database might be a good starting point to make this project a reality. I came here looking for the savvy, intelligence, and technical expertise in all things AI and NIT that I've come to expect of this site's user base, hoping that some of you might be interested in having a look at the discussion and, perhaps, would feel like joining in, or at least sharing some good advice.
Thank you. (Also, should I make this post "Discussion" or "Top Level"?)
Completely artificial intelligence is hard. But we've already got humans, and they're pretty smart - at least smart enough to serve some useful functions. So I was thinking about designs that would use humans as components - like Amazon's Mechanical Turk, but less homogeneous. Architectures that would distribute parts of tasks among different people.
Would you be less afraid of an AI like that? Would it be any less likely to develop its own values, and goals that diverged widely from the goals of its constituent people?
Because you probably already are part of such an AI. We call them corporations.
Corporations today are not very good AI architectures - they're good at passing information down a hierarchy, but poor at passing it up, and even worse at adding up small correlations in the evaluations of their agents. In that way they resemble AI from the 1970s. But they may provide insight into the behavior of AIs. The values of their human components can't be changed arbitrarily, or even aligned with the values of the company, which gives them a large set of problems that AIs may not have. But despite being very different from humans in this important way, they end up acting much like us.
Corporations develop values similar to human values. They value loyalty, alliances, status, resources, independence, and power. They compete with other corporations, and face the same problems people do in establishing trust, making and breaking alliances, weighing the present against the future, and choosing game-theoretic strategies. They even went through stages of social development similar to those of people, starting out as cutthroat competitors and developing different social structures for cooperation (oligarchy/guild, feudalism/keiretsu, voters/stockholders, criminal law/contract law). All this despite having different physicality and different needs.
It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.
As corporations are larger than us, with more intellectual capacity than a single person and more complex laws governing their behavior, it should follow that the ethics developed to govern corporations are more complex than the ethics that govern human interactions, and that they are a good guide to the initial trajectory of values that (other) AIs will have. But it should also follow that these ethics are too complex for us to perceive.
By "the industry" in this post, I refer to that part of the entertainment industry which:
1. Produces movies, TV and video games (as opposed to books, comics etc.)
2. Is motivated by profit (as opposed to fun, politics etc.)
3. Consists of companies (as opposed to lone developers, student teams etc.)
It seems to me that the industry has two characteristics.

First, most products follow some formula which is known to be workable. Under what circumstances is this rational? (I'm not commenting on whether it's artistically good or bad; again, I'm only discussing entertainment as a commercial enterprise motivated by profit.) It seems to me that following a proven formula is rational if your priority is not to lose, to go for the sure thing, i.e. the chance of a big hit is not worth the risk of a complete flop.

Second, it's the accepted wisdom that entertainment is a hit-driven industry: almost all the profits are generated by a handful of the most successful products, with the rest losing money or barely covering costs.
Now my question: isn't there a contradiction here? If you're selling insurance, following a proven formula may well be the rational thing to do. If you're the owner of one of the handful of franchises that is pulling in big profits, of course you shouldn't mess with a winner. But if you're one of the many also-rans, how is it rational to stick with an almost sure loser? In a hit-driven industry, wouldn't it be more rational to concentrate on maximizing your chance of winning big, instead of trying to minimize the risk of a flop?
But I've never worked in the entertainment industry; perhaps my layman's impression of it is inaccurate. Is there something I'm missing, or is a substantial amount of expected profit really being left on the table?
I hope this is a good place for this - comments/suggestions welcome - offers of collaboration more than welcome!
I envisage a kind of structured wiki, centred around the creation of propositions, which can be linked to allow communities of interest to rapidly come to fairly sophisticated levels of mutual understanding; the aim being to foster the development of strong groups with confidence in shared, conscious positions. This should allow significant confidence in collaboration.
Some aspects, in no particular order;
- Propositions are made by users, and are editable by users - as in a wiki
- Each proposition could be templated - the inspiration for the template being the form adopted by Christopher Alexander et al. in 'A Pattern Language', namely;
- TITLE (referenced)(confidence level)
- context - including links to other propositions within whose sphere this one might operate
- STATEMENT OF PROBLEM/PURPOSE OF PROPOSITION
- CONCLUSION - couched in parametric/generic/process based terms
- links to other propositions for which this proposition is the context
- Some mechanism for users to make public their degree of acceptance of each proposition
- Some mechanism for construction by individuals/groups of networks of propositions specific to particular users/groups (in other words, the context links and follow-on links referred to above might be different for different users/groups). These networks can work like Pattern Languages that address particular fields / ethical approaches / political or philosophical positions / projects
- Some mechanism for assignment by users/groups of tiered structure to proposition networks (to allow for distinctions to be made between fundamental, large scale propositions and more detailed, peripheral ones)
- Some mechanism for individual users to form associations with other users/established groups who are subscribing to the same propositions
- Some mechanism for community voting/karma to promote individuals to assume stewardship of groups
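As a very rough sketch of the data model implied by the aspects above (all names and the affinity measure are my own illustrative choices, not a specification):

```python
from dataclasses import dataclass, field

@dataclass
class Proposition:
    title: str
    statement: str        # statement of problem / purpose
    conclusion: str       # couched in parametric/generic/process terms
    context: list = field(default_factory=list)   # parent propositions
    children: list = field(default_factory=list)  # propositions using this as context

@dataclass
class User:
    name: str
    # proposition title -> degree of acceptance in [0, 1]
    acceptance: dict = field(default_factory=dict)

def affinity(a: User, b: User) -> float:
    """Crude overlap score: how similarly two users rate shared propositions."""
    shared = a.acceptance.keys() & b.acceptance.keys()
    if not shared:
        return 0.0
    return 1 - sum(abs(a.acceptance[t] - b.acceptance[t]) for t in shared) / len(shared)

root = Proposition(
    title="Other people are real, just like me",
    statement="How should we regard other minds?",
    conclusion="Assume other humans share roughly my motivations and needs.",
)

alice = User("alice", {"Other people are real, just like me": 0.9,
                       "Checklists improve outcomes": 0.8})
bob = User("bob", {"Other people are real, just like me": 0.7,
                   "Checklists improve outcomes": 0.8,
                   "The Bible is revealed truth": 0.1})
score = affinity(alice, bob)  # a high score -> suggest bob's groups to alice
```

The group-matching interaction described below amounts to computing something like this affinity between a user and the aggregate acceptance profile of each group.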
Enough of these for now. Some imagined interactions might be more helpful;
- I stumble across the site (as I stumbled across LessWrong), and browse proposition titles. I come across one called 'Other people are real, just like me'. It contains some version of the argument for accepting that other humans should be assumed to have roughly the same motivations, needs, and so on, as me, and the suggestion that this is a useful founding block for a rational morality. I decide to subscribe, fairly strongly. I am offered a tailored selection of related propositions, as identified by the groups that have included this proposition in their networks (without identification of said groups, I rather think). I investigate these, and at some point the system decides that my developing profile is beginning to match that of some group or groups, and offers me the chance to look at their 'mission statement' pages. I decide to come back another day and look at other propositions included in these groups' networks, before going any further. I decline to have my profile made public, so that the groups don't contact me.
- I come across some half-baked, but interesting proposition. As a registered user, but not the originator of the proposition, I have some choices; I can comment on the proposition, hoping to engage in dialogue with the proposer that could be fruitful, or I can 'clone' (or 'fork') the proposition, and seek to improve it myself. Ultimately, the interest of other users will determine the influence and relevance of the proposition.
- I am a fundamentalist Christian (!). I come across the site, and am appalled at its secular, materialist tone. I make a new proposition: 'The Bible is revealed truth, in all its glory' (or some such twaddle. Of course, I omit to specify which edition, and don't even consider the option of a language other than English - but hey, what do you expect?). Within days, I have assembled a wonderfully active group of woolly-minded people happily discussing the capacity of Noah's Ark, or whatever. The point here is that the platform is just that - a platform. Human community is a Good Thing.
- I am pushed upward by the group I am part of to some sort of moderator role. The system shows various other groups who agree more or less strongly with most of the propositions our group deems fundamental. I contact my opposite number in one of those, and we together make a new proposition which we believe could be a vehicle for discussions that could lead to a merger.
- I wish to write a business plan that is not a pile of dead tree gathering dust six weeks after it was presented to the board. I attempt to set out the aims of the business as fundamental propositions, and advertise this network to my colleagues, who suggest refinements. On this basis, we work up a description of the important policies and 'business rules' which define the enterprise. These remain accessible and editable, so that they can evolve along with the business.
- I am considering an open-source project. I set out the fundamental aims and characteristics of the tool I am proposing, and link them together. The system allows me to set myself up as a group. I sit back and wait for others to comment. Based on these comments, the propositions are refined, others added, relationships built with potential collaborators. At some point, we form a group, and the project gets under way. Throughout its life, the propositions are continually refined and added to. The propositions are a useful form of marketing, and save us a great deal of bother talking to people who want to know what/why/how.
Enough for now... The last example (the open-source project) is almost recursive.
There is more discursive (and older) material here.
Thanks for reading, and please do comment.