What Would You Like To Read? A Quick Poll

0 alyssavance 21 June 2012 12:38AM

In our discussion of academic papers, Lukeprog argued that lots of smart people preferred to read ideas in academic paper format. Based on my observations, I mostly disagree. But that's just anecdotal evidence. Let's use Science!

Suppose someone at the Singularity Institute thought up a cool new idea: it could be about rationality, Friendly AI, decision theory, making money, or any of the other topics we discuss here on LW. Explaining it takes about ten pages, and it's nontechnical enough that it can be explained to a general audience of non-mathematicians. Which of the following explanations would you be most likely to actually sit down and read through?

  • A post on Less Wrong or another friendly blog
  • A book chapter, available both on Kindle and in physical book form
  • A mailing list post, made available through a public archive
  • An academic paper, downloadable over the Internet as a PDF
  • A static HTML page on the Singularity Institute's website
  • A page on a Singularity Institute or Less Wrong wiki
  • A speech, downloadable as an audio file
  • A PowerPoint or other presentation format

EDIT: To state the obvious, this poll will be biased in favor of blog postings, since it's on a blog. However, I still think it'll provide data that's much better than anecdotal guessing. I've emailed a few rationalist mailing lists to try to counteract this effect.

Comment author: lukeprog 20 June 2012 07:49:08PM *  9 points [-]

what do you view as the purpose of SIAI's publishing papers?

  1. Grab the interest of smart people who won't be grabbed by cheaper methods. This has worked before. Also: Many smart and productive people are extremely busy, and they use "Did they bother to pass peer review?" as a filter for what they choose to read. In addition, many smart people prefer to read papers over blog posts because papers are generally better organized, are more clearly written, helpfully cite related work, etc.

  2. Reduce communication overhead. We don't have time to have a personal conversation with every interested smart person, and blog posts are often too disorganized and ambiguous to help. Though for this, a scholarly AI risk wiki would probably be even better. Luckily, as I say in that post, there isn't much additional cost involved in turning parts of papers into wiki articles, or combining wiki articles into papers.

  3. Grab some prestige and credibility, because this matters to lots of the people we care about.

  4. Show that we're capable of doing serious research. "Eliezer did some work with Marcello that we can never tell you about" and "We wrote some blog posts this month" don't quite show to most people that we can do research.

  5. Be kinda-forced into writing more clearly, and in a way that is more thoroughly connected to the relevant empirical literatures, than we might otherwise be tempted to write.

Why not just publish them directly on the site, in (probably) a much more readable format?

As I said before, many people find papers more readable than ambiguous blog posts barely connected to the relevant literatures. Eliezer's papers aren't written in a different style than his blog posts, anyway. Also, peer review often improves the final product.

The problem with conformity in academia... is that a) it restricts the sorts of things you can say, b) restricts you, in many cases, to an awkward way of wording things, and c) it makes academia a less fertile ground for recruiting people.

Agree with (a) and somewhat with (b), but we're only writing certain things in paper form. Like I said, the vast majority of FAI work and discussion happens outside papers. I don't know what you mean by (c).

it seems... that we aren't going to have that much prestige in academia anyway, given that the main prestige mechanism is elite university affiliations, and most of us don't have those.

I don't care about something like "average prestige in academia." What I care about is some particular people thinking we have enough credibility to bother reading and engaging with. Many of the people I care about won't bother to check whether the author of an article has elite university affiliation, but will care if we bothered to write up our ideas clearly and with references to related work. The Singularity and Machine Ethics looks much less crankish than Creating Friendly AI, even though none of the authors have elite university affiliation.

Which people have come through Eliezer and Bostrom's papers?

Still gathering data, and I haven't gathered permission to share it. I think two people who wouldn't mind you knowing they came to x-risk through "Astronomical Waste" are Nick Beckstead and Jason Gaverick Matheny.

Using my own personal experiences is... very far from generalizing from a single example

Point taken.

How hard would it be to get someone to read a paper, vs. a single Sequence post of equal length, or a bunch of Sequence posts that sum to an equal length?

My intended point was that sometimes a paper has summed up the main points from something that Eliezer took 30 blog posts to write when he wrote The Sequences. But obviously you don't have to write a paper to do this, so I drop the point.

If all new areas of research are developed through in-person conversations and mailing lists, that doesn't imply that papers are a good way to do FAI research; it implies that papers are a bad way to do all those other kinds of research.

Remember: almost all FAI research is not done via papers. In my above list of reasons why SI publishes papers, I didn't even think to mention "to produce original research" (and I won't go back and add it now), though that sometimes happens.

there are some instances of academic moderation being net good rather than net bad. However, to quote one of your earlier arguments, "don't generalize from one example". I'm sure that there are some well-moderated journals, just as I'm sure there are Mafia bosses who are really nice helpful guys. However, that doesn't imply that hanging out with Mafia bosses is a good idea.

If one journal is poorly moderated, then you jump to another one. Unlike Mafia bosses, a "problem" with journal moderators means "I wasted a few hours communicating with them and making revisions," not "They decided to cut off my thumbs."

Comment author: alyssavance 20 June 2012 08:03:00PM *  3 points [-]

Re-replying:

  • For people who "are extremely busy, and they use "Did they bother to pass peer review?" as a filter for what they choose to read", which specific examples are you thinking of, and how many of them became nontrivial members of our community, or helped us out in nontrivial ways?

  • I'm sure there are people who a) are very smart, b) look impressive on paper, who c) we've contacted about FAI research, and d) have said "I'm not going to pay attention, since this isn't peer reviewed" (or some equivalent). However, I think that for most of those people, that isn't their true rejection (http://lesswrong.com/lw/wj/is_that_your_true_rejection/), and they aren't going to take us seriously anyway. But I could be wrong - what evidence do you have in mind?

  • A lot of your points are criticisms of blog posts, like "a lot of them don't have citations", or "a lot of them are poorly organized". These are true in many cases. However, if SIAI is considering whether to publish some given idea in paper or blog post form, they could simply spend the (fairly small) effort to write a blog post which was well organized and had citations, thereby making these problems moot.

  • Journal editors obviously aren't perfectly analogous to mob bosses. However, I've heard many stories from academics of authors spending huge amounts of time and effort trying to get stuff published. In the most recent case, which I discussed with a grad student just a few hours ago, it took hundreds of hours, over a full year. If it's usually easy to get around that sort of thing, by just publishing in a different journal, why don't more academics do so?

Comment author: Vaniver 20 June 2012 06:36:00PM 0 points [-]

Agreed and upvoted, but:

Similarly to #8, this is a "heuristic argument" rather than an airtight proof.

Are any of the points airtight proofs?

Comment author: alyssavance 20 June 2012 07:11:35PM 2 points [-]

Not in the mathematical sense, but it's a difference of degree.

Comment author: lukeprog 20 June 2012 06:46:39PM *  24 points [-]

My reply, in the context of Singularity Institute research:

academic papers are a terrible way to discuss Friendly AI

Almost all FAI discussion happens outside of papers. It happens on mailing lists, forums like Less Wrong, email threads, personal conversations, etc. Yesterday I had a three hour discussion about FAI with Eliezer, Paul Christiano, and Anna Salamon where we covered more ground than we possibly could in a 20-page paper because there's so much background material that we all agree on but hasn't been written up. Nobody is waiting around for papers to come out to advance FAI theory; that's not what papers are for.

The time lag is huge

Most SI papers borrow heavily from material that originated from mailing list discussions or LW posts, and most peer-reviewed SI publications are posted in preprint version when they are written instead of months later when they are published by the academic publisher.

Most academic publications are inaccessible outside universities

All SI publications are published on our website, which is open to everyone. Same goes for all of Nick Bostrom's papers.

Virtually no one reads most academic publications.

Not via the journals and academic books themselves, no. That's why SI and FHI publish their papers to our own websites, where they are read by far more people than read them in the journals themselves.

It's very unusual to make successful philosophical arguments in paper form. I honestly can't think of a single instance where I was convinced of an informal, philosophical argument through an academic paper.

Don't generalize from one example. I'm slowly surveying a good chunk of the "player characters" in the x-risk reduction space, and a good chunk of them were hugely influenced by Eliezer's two GCR chapters or by Bostrom's Astronomical Waste.

Papers don't have prestige outside a narrow subset of society.

But we care unusually much about that narrow subset of society. Also, I don't write papers so much for prestige as for the fact that it forces me to write in a way that is unusually clear, well-referenced (so that people can check what other people are saying about each individual element), well-structured, careful, and so on. In contrast, people read the Hanson-Yudkowsky debate and there are 5 different ways to interpret every other paragraph and no references by which to check anything and they have no idea what to think.

Getting people to read papers is difficult.

Not as hard as getting them to read The Sequences. Also, many of the people we care about (e.g. me) find it easier to read papers than to read a few blog posts, because papers tend to be more clearly written and point the reader to related sources.

Academia selects for conformity.

No problem; there are plenty of journals that are likely to publish the kinds of papers SI publishes, and some already have.

What has been successful, so far, at bringing new people into our community? I haven't analyzed it in depth, but whatever the answer is, the priors are that it will work well again.

As said previously, most FAI discussion still happens outside of papers, but in fact it turns out that several important people did come through Eliezer's and Bostrom's papers.

it's important to note that our current ideas about Friendly AI ... were not developed through papers, but through in-person and mailing list discussions (primarily).

Same goes for all new areas of research. They're developed in person and on mailing lists long before they end up in journal articles.

Academic moderation is both very strict and badly run.

This is sometimes a problem, sometimes not. Communications of the ACM might reject the paper Nick Bostrom and I wrote for it because it's too philosophical and we don't have the space to respond to all common objections. So we may end up publishing it somewhere else. But with my two TSH chapters, all that happened was that I got a bunch of feedback, some of it useful and some of it not, so I incorporated the useful feedback and ignored the useless feedback and published significantly improved papers as a result. Other people I've spoken to about this have reported a similar spread of experiences.

Also see two of my previous posts on the topic, neither of which I agree with anymore: How SIAI could publish in mainstream cognitive science journals and Reasons for SIAI to not publish in mainstream journals.

Comment author: alyssavance 20 June 2012 07:10:06PM *  4 points [-]

Hi Luke! Thanks for replying. Quick counterpoints:

  • Probably most importantly, what do you view as the purpose of SIAI's publishing papers? Or, if there are multiple purposes, which do you see as the most important?

  • If in-person conversations (despite all their limitations) are still the much preferred way to discuss things, instead of papers, that's evidence in favor of papers being bad. (It's also evidence of SIAI being effective, which is great, but that isn't the point under discussion.) If papers were a good discussion forum, there'd be fewer conversations and more papers.

  • If, as you say, the main audience for papers written by SIAI is through SIAI's website and not through the journals themselves, why spend the time and expense and hassle to write them up in journal form? Why not just publish them directly on the site, in (probably) a much more readable format?

  • The problem with conformity in academia isn't that it's impossible to find someplace to publish. You can always find somewhere, given enough effort. The problem is that a) it restricts the sorts of things you can say, b) restricts you, in many cases, to an awkward way of wording things (which I believe you've written about at http://lesswrong.com/lw/4r1/how_siai_could_publish_in_mainstream_cognitive/), and c) it makes academia a less fertile ground for recruiting people. Those are probably in addition to other problems.

  • I agree that we care more about prestige within academia than we do about prestige in almost all similarly sized groups. However, it seems fairly clear that we aren't going to have that much prestige in academia anyway, given that the main prestige mechanism is elite university affiliations, and most of us don't have those.

  • Which people have come through Eliezer and Bostrom's papers? (That isn't a rhetorical question; given how large our community is compared to Dunbar's number, it's likely there is someone and it's also likely I've missed them, and they might be really cool people to know.)

  • Using my own personal experiences is generalizing from a single dataset, and that's indeed biased in some ways. However, it's very far from generalizing from a single example; it's generalizing from the many thousands of arguments that I've read and accepted at some point in the past. It's still obviously better to use multiple datasets, if you can get them.... but in this case they're difficult to get, because it's hard to know where your friends got all their beliefs.

  • Sure, it's easier to get people to read a single paper than all of the Sequences. But that's a totally unfair comparison: the Sequences are much, much longer, and it's always easier to read something shorter than something longer. How hard would it be to get someone to read a paper, vs. a single Sequence post of equal length, or a bunch of Sequence posts that sum to an equal length?

  • If all new areas of research are developed through in-person conversations and mailing lists, that doesn't imply that papers are a good way to do FAI research; it implies that papers are a bad way to do all those other kinds of research. If what you say is true, then my argument equally well applies to those fields too.

  • Of course, there are some instances of academic moderation being net good rather than net bad. However, to quote one of your earlier arguments, "don't generalize from one example". I'm sure that there are some well-moderated journals, just as I'm sure there are Mafia bosses who are really nice helpful guys. However, that doesn't imply that hanging out with Mafia bosses is a good idea.

Why Academic Papers Are A Terrible Discussion Forum

25 alyssavance 20 June 2012 06:15PM

Over the past few months, the Singularity Institute has published many papers on topics related to Friendly AI. It's wonderful that these ideas are getting written up, and it's virtually always better to do something suboptimal than to do nothing. However, I will make the case below that academic papers are a terrible way to discuss Friendly AI, and other ideas in that region of thought space. We need something better.

I won't try to argue that papers aren't worth publishing. There are many reasons to publish papers - prestige in certain communities and promises to grant agencies, for instance - and I haven't looked at them all in detail. However, I think there is a conclusive case that as a discussion forum - a way for ideas to be read by other people, evaluated, spread, criticized, and built on - academic papers fail. Why?

 

1. The time lag is huge; it's measured in months, or even years.

Ideas structured like the Less Wrong Sequences, with large inferential distances between beginning and ending, have huge webs of interdependencies: to read A you have to read B, which means you need to read C, which requires D and E, and on and on and on. Ideas build on each other. Einstein built on Maxwell, who built on Faraday, who built on Newton, who built on Kepler, who built on Galileo and Copernicus.

For this to happen, ideas need to get out there - whether orally or in writing - so others can build on them. The publication cycle for ideas is like the release cycle for software. It determines how quickly you can get feedback, fix mistakes, and then use whatever you've already built to help make the next thing. Most academic papers take months to write up, and then once written up, take more months to publish. Compare that to Less Wrong articles or blog posts, where you can write an essay, get comments within a few hours, and then write up a reply or follow-up the next day.

Of course, some of that extra time lag is that big formal documents are sometimes needed for discussion, and big formal documents take a while. But academic papers aren't just limited by writing and reviewing time - they still fundamentally operate on the schedule of the seventeenth-century Transactions of the Royal Society. When Holden published his critique of the Singularity Institute on Less Wrong, a big formal document, Eliezer could reply with another big formal document in about three weeks.

 

2. Most academic publications are inaccessible outside universities.

This problem is familiar to anyone who's done research outside a university. The ubiquitous journal paywall. People complain about how the New York Times and Wall Street Journal have paywalls, but at least you can pay for them if you really want to. It isn't practical for almost anyone doing research to pay for the articles they need out-of-pocket, since journals commonly charge $30 or more per article, and any serious research project involves dozens or even hundreds of articles. Sure, there are ways to get around the system, and you can try to publish (and get everyone else in your field to publish) in open-access journals, but why introduce a trivial inconvenience?


3. Virtually no one reads most academic publications.

This obviously goes together with point #2, but even within universities, it's rare for papers, dissertations or even books to be read outside a very narrow community. Most people don't regularly read journals outside their field, let alone outside their department. It's hard to get statistics on academic papers, but, e.g., I was a math major in undergrad, and I can't even understand the titles of most new math papers. More broadly, the print run of most academic books is very small, only a few hundred or so. The average Less Wrong post gets more views than that.

 

4. It's very unusual to make successful philosophical arguments in paper form.

When doing research for Personalized Medicine, I often read papers to discover the results of some experiment. Someone gave drug X to people with disease Y. What were the results? How many were cured? How many had side effects? What were the costs and benefits? All useful information.

However, most recent Singularity Institute papers are neither empirical ("we did experiment X, these are the results") nor mathematical ("if you assume A, B, and C, then D and E follow"). Rather, they are philosophical, like Paul Graham's essays. I honestly can't think of a single instance where I was convinced of an informal, philosophical argument through an academic paper. Books, magazines, blog posts - sure, but papers just don't seem to be a thing.

 

5. Papers don't have prestige outside a narrow subset of society.

Several other arguments here - the time lag, for instance - also apply to books. However, society in general recognizes that writing a book is a noteworthy achievement, especially if it sells well. A successful author, even if not compensated well, is treated a little like a celebrity: media interviews, fan clubs, crazy people writing him letters in green ink, etc. (This is probably related to them not being paid well: in the labor market, payment in social status probably substitutes to a high degree for payment in money, as we see with actors and musicians.)

There's nothing comparable for academic papers. No one ever writes a really successful paper, and then goes on The Daily Show, or gets written up in the New York Times, or gets harassed by crowds of screaming fangirls. (There are a few exceptions, like medicine, but philosophy and computer science are not among them.) E.g., a lot of people are familiar with Ioannidis's paper, Why most published research findings are false. However, he also wrote another paper, a few years earlier, titled Replication validity of genetic association studies. This paper actually has more citations - over 1300 at last count. But not only have we not heard of it, no one else outside the field has either. (Try Googling it, and you'll see what I mean.)

 

6. Getting people to read papers is difficult.

Most intellectual people regularly read books, blogs, newspapers, magazines, and other common forms of memetic transmission. However, it's much less common for people to read papers, and that leaves people with fewer affordances for doing so when they're asking "hey, this thing is a crazy idea, why should I believe it?". Papers are, intentionally, written for an audience of specialists rather than a general interest group, which reduces both the tendency and ability of non-specialists to read them when asked (and also violates the "Explainers shoot high - aim low" rule).

 

7. Academia selects for conformity.

The whole point of tenure is to avoid selecting for conformity - if you have tenure, the theory goes, you can work on whatever you want, without fear of being fired or otherwise punished. However, only a small (and shrinking) number of academics have tenure. In order to make sure fools didn't get tenure, it turns out academia resorted to lots and lots of negative selection. The famous letter by chemistry professor Erick Carreira illustrates some of what the selection pressure is like, similar to medicine or investment banking: there's a single, narrow "track", and people who deviate at any point are pruned. Lee Smolin has written about this phenomenon in string theory, in his famous book The Trouble with Physics.

Things may change in the future, but as it stands now, many ideas like the Singularity are non-conformist, well outside the mainstream. They aren't likely to go very far in an environment where deviations from the norm are seen negatively.

 

8. The current community isn't academic in origin.

This isn't an airtight argument, because it's heuristic - "things which worked well before will probably work again". However, heuristic arguments still have a lot of validity. One of the key purposes of a discussion forum, like Less Wrong or the SL4 list that was, is to get new people with bright ideas interested in the topics under discussion. Academia's track record of getting new people interested isn't that great - of the current Singularity Institute directors and staff, only one (Anna Salamon) has an academic background, and she dropped out of her PhD program to work for SIAI. What has been successful, so far, at bringing new people into our community? I haven't analyzed it in depth, but whatever the answer is, the priors are that it will work well again.

 

9. Our ideas aren't academic in origin.

Similarly to #8, this is a "heuristic argument" rather than an airtight proof. But I still think it's important to note that our current ideas about Friendly AI - any given AI will probably destroy the world, mathematical proof is needed to prevent that, human value is complicated and hard to get right, and so on - were not developed through papers, but through in-person and mailing list discussions (primarily). I'm also not aware of any ideas which came into our community through papers. Even science fiction has a better track record - eg. some of our key concepts originated in Vinge's True Names and Other Dangers. What formats have previously worked well for discussing ideas?

 

10. Papers have a tradition of violating the bottom line rule.

In a classic paper, one starts with the conclusion in the abstract, and then builds up an argument for it in the paper itself. Paul Graham has a fascinating essay on this form of writing, and how it came to be - it ultimately derives from the legal tradition, where one takes a position (guilty or innocent), and then defends it. However, this style of writing violates the bottom line rule. Once something is written on the paper, it is already either right or wrong, no matter what clever arguments you come up with in support of it. This doesn't make it wrong, of course, but it does tend to create a fitness environment where truth isn't selected for, just as Alabama creates a fitness environment where startups aren't selected for.

 

11. Academic moderation is both very strict and badly run.

All forums need some sort of moderation to avoid degenerating. However, academic moderation is very strict by normal standards - in a lot of journals, only a small fraction of submissions get approved. In addition, academic moderation has a large random element, and is just not very good overall; many quality papers get rejected, and many obvious errors slip through.

As if that wasn't enough, most journals are single-blind rather than double-blind. You don't know who the moderators are, but they know who you are, raising the potential for all kinds of obvious unfairness. The most common kind of bias is one that hurts us unusually badly: people from prestigious universities are given a huge leg up, compared to people outside the system.

(This article has been cross-posted to my blog, The Rationalist Conspiracy.)

 

EDIT #1: As Lukeprog notes in the comments, academic papers are not our main discussion forum for FAI ideas. In practice, the main forum is still in-person conversations. However, in-person conversations have critical limitations too, albeit more obvious ones. Some crucial limits are the small number of people who can participate at any one time; the lack of any external record that can be looked up later; the lack of any way to "broadcast" key findings to a larger audience (you can shout, but that's not terribly effective); and the lack of lots of time to think, since each participant in the conversation can't really wait three hours before replying.

EDIT #2: To give a specific example of an alternative forum for FAI discussion, I think the proposal for an AI Risk wiki would solve most of the problems listed here.

Ideas for rationalist meetup topics

15 alyssavance 12 January 2012 05:04AM

(From Zvi Mowshowitz, leader of the New York group, and based on his experience)


Category 1: Discussions - The Base
A: Less Wrong topics. Usually recent posts. Often big hits.
B: Rationalist group planning and organization. Meta-topics. Dealing with group issues on occasion.
C: Spreading Rationality. Discussion of various approaches.
D: Contraband Topics. Discussion of things that I won't include here, but you can guess.
E: How To Do X, or how to do X well. Charity, meeting people, improving skills, relationships, etc.

Category 2: Presentations - Almost Always a Winner
F: Sequences. Andrew Rettek did these for public meetups, went over well I think.
G: Personal Projects: Variance, Geoff Anders's Psychology.
H: General Knowledge: CBT, Starting a Business, Basic Python.
I: Advanced Rationality: Attempted once, successful despite poor execution. Should be explored more.

Category 3: Game Nights
J: Proven Good Games: Illuminati, Poker, Citadels, 7 Wonders, Nomic.
K: Advanced Games: Much demand for general gaming, but not really that rationalist. Proven winners in this group: Vegas Showdown, Power Struggle, Baltimore & Ohio, Tichu, Through the Ages. I can keep going, many good choices. AVOID: Settlers of Catan (personal opinion) and Fluxx (cause you can do better, honestly).
U: Outside Games. Ultimate Frisbee is good.

Category 4: Other Celebrations
L: Karaoke
M: Baraccuda (bring something awesome you love)
N: Contraband Activities (enough said)
O: General Party In or Out / Pot Luck

Category 5: Special Guests
P: Famous Rationalists! If you get Eliezer or Robin or Vassar, you need no other topic. Probably a few others who get there on their own as well, or any generally famous guest.

Category 6: Self-Improvement
Q: Go out and do something to improve social skills.
R: Discuss or map out goals and plans. Set goals. Can be combined with other tasks.
S: Run experiments! For science!
T: Study Hall. Also folds into discussion of self-improvement topics.

Quantified Health Prize Deadline Extended

3 alyssavance 05 January 2012 09:28AM

(Original Post: Announcing the Quantified Health Prize)

I've recently been hired by Personalized Medicine, a new research company trying to bring Less Wrongian rationality to the medical world. We're giving away a $5000 prize for well-researched, well-reasoned presentations that answer the following question: What are the best recommendations for what quantities adults (ages 20-60) should take the important dietary minerals in, and what are the costs and benefits of various amounts?

Entries are now due by January 15th, 2012. This is an update from the original date of December 31st, 2011. However, we will not change this deadline again, and it will be strictly enforced. If you submit your entry on January 16 at 12:01 AM Pacific time, we will not read it.

Why enter the contest? If you have an excellent entry, even if you don’t win the grand prize, you can still win one of four additional cash prizes, you’ll be under consideration for a job as a researcher with our company Personalized Medicine, and you’ll get a leg up in the larger contest we plan to run after this one. You also get to help people get better nutrition and stay healthier.

More info about the contest, and instructions for submitting entries, can be found at the contest website at http://www.medicineispersonal.com/contest/home. Good luck!

How To Spin What Is Spun

0 alyssavance 04 September 2011 04:47AM

In which is described some of the "spin" tricks that an author might use, so that people might defend against them, or use them for themselves when they may.

As a startup founder, as in many other walks of life, it is sometimes necessary to convince people of things. Capital needs to be raised from reluctant investors; deals need to be done with bureaucrats, who would really rather be playing Minesweeper; prices need to be negotiated, reporters persuaded, workers hired and loans secured. A common technique to achieve this is to write material that is, not false exactly, but slanted in a certain direction; written with the aim of getting the reader to agree with a particular point of view.

There are a thousand-and-one tricks one might use to accomplish this. A successful founder might learn, implicitly, to use them well; but, lacking knowledge of the underlying principles of cognitive science, would probably find them hard to explain to others. I am by no means an expert at either, but it seems like these sorts of tricks really ought to be explained, for they have truly become ubiquitous in today's marketing-driven society. So, here it goes.

Comment author: [deleted] 25 April 2011 06:17:39AM 1 point [-]

Anyone planning on coming from the vicinity of Hartford? I'm in West Hartford and I don't drive. I'll buy your pizza!

Comment author: alyssavance 25 April 2011 05:05:16PM 0 points [-]

Cool. This is a sub-optimal alternative compared to driving, but there is frequent Greyhound service between New Haven and Hartford.

New Haven / Southern Connecticut Meetup, Wednesday Apr. 27th 6 PM

5 alyssavance 25 April 2011 04:00AM
