Open thread, Oct. 13 - Oct. 19, 2014
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Comments (355)
Should effective altruists donate to fighting Ebola?
Argument against: usually very famous things that make the news are terrible effective altruist causes and you should stick to well-studied things like malaria.
Argument for: Ebola is very underfunded compared to sexier disasters. And it is a disease in the Third World, a category which has brought us most of the best-known effective altruism interventions.
Thoughts: The CDC estimates a best-case scenario of 20,000 cases by January and a worst-case scenario of about 1.5 million cases by January. They do not estimate risks past January. There are also black swan risks in which Ebola spreads to the entire Third World (e.g. India) and kills tens of millions of people there. However, on the margin individual donations are unlikely to shift the virus from one of these scenarios to another, so it's probably more worth considering how much good the marginal donation does.
Doctors Without Borders is a very well known, GiveWell-approved charity. They are running clinics in the country, but it's hard to tell how much more clinic they can run per dollar. On the other hand, they are also giving out home infection prevention kits by the tens of thousands. Other charities price these at about ten dollars per kit, although I've seen estimates that differ by an order of magnitude. I don't think anybody knows how effective the kits are going to be, although everyone agrees they are a vastly inferior option to sufficient space in hospitals, which at the moment does not exist.
If we estimate 100,000 Liberians (roughly the geometric mean of the estimates) eventually infected, i.e. 2% of the population, then $1000 buys 100 kits, of which about 2 go to people likely to be infected.
$1000 for malaria bed nets supposedly gives something like 20 to 100 DALYs, depending on whose estimate you trust.
Ebola death rate is about 50%. Suppose the average infected person has 30 DALYs left to lose. So each case of Ebola costs 15 DALYs directly. But it probably ends up costing more like 30, because I think on average each case infects one other person (I don't think this is meant to iterate, or else the estimate quickly goes to infinity). So if every Ebola kit was 100% effective, we would expect distributing the kits to save 60 DALYs.
That means in order for kits to be as good as the bottom range of estimates for bed nets, they would have to be at least 33% effective in preventing Ebola among people who get them, which they probably aren't.
On the other hand, every number in this estimate is a total wild guess, and I don't trust that I'm within two orders of magnitude of anything approaching reality. Kits likely cost more when including distribution (I expect charities to underreport costs to make people feel good about giving them), there's no guarantee that there's room for more kits, and my rate of how many subsequent cases are caused by each case is from a half-remembered news article. Does anyone have better ideas for how to figure this out?
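For what it's worth, the arithmetic above can be collected into a quick back-of-the-envelope script. Every input here is one of the wild guesses from this comment, not a vetted figure:

```python
# Back-of-the-envelope comparison of Ebola kits vs. malaria bed nets,
# using the guessed inputs from the comment above.

donation = 1000            # dollars
kit_cost = 10              # dollars per home infection-prevention kit
p_infected = 0.02          # ~100,000 of Liberia's population eventually infected
death_rate = 0.5           # Ebola case fatality rate
dalys_per_death = 30       # remaining healthy life-years of the average victim
secondary_multiplier = 2   # each case infects ~1 more person (one generation only)

kits = donation / kit_cost               # 100 kits per $1000
kits_to_at_risk = kits * p_infected      # ~2 kits reach people likely to be infected
# 15 DALYs lost directly per case, doubled for one generation of secondary cases:
dalys_per_case = death_rate * dalys_per_death * secondary_multiplier  # 30
max_dalys_saved = kits_to_at_risk * dalys_per_case  # if kits were 100% effective

bednet_dalys_low = 20      # low end of the bed-net estimates per $1000
breakeven_effectiveness = bednet_dalys_low / max_dalys_saved

print(max_dalys_saved)           # 60.0
print(breakeven_effectiveness)   # ~0.33, the 33% break-even figure above
```

Since every input is a guess, the useful output is the structure of the calculation, not the numbers themselves.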
The absolute numbers are far, far below panic levels, but the underreporting and ridiculously exponential curve are pretty disturbing. It's showing little sign of saturation in the currently affected populations (an apparent levelling off of infection rates in Liberia was accompanied by reports of difficulty gathering data), it could spread to other populations, and wherever it goes it brings not just ebola but economic disruption, famine, and disruption of health systems that deal with other, more common problems like malaria and childbirth complications. As of now the measured doubling time is circa 3.5 weeks (a bit longer than what the worst-case CDC models used, but not by much), with each case infecting about two new ones.
The scary possibility is it getting established in additional poor urban populations. It might already be starting to set off famines in the areas it has reached. All exponentials eventually run into a wall and saturate, but it's unclear exactly which walls will do the job here, behavioral or medical or geographic, and exactly where they are and at what order of magnitude they lie. The possibility of thicker spread through larger populations dominates any discussion of the potential effects of the situation.
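To put the doubling time in perspective, here is a naive unchecked-exponential projection. The starting case count is purely illustrative, not a reported figure, and real epidemics saturate:

```python
import math

doubling_time_weeks = 3.5
cases_now = 10_000          # illustrative starting point only

def projected(weeks):
    """Naive exponential projection: no saturation, no intervention."""
    return cases_now * 2 ** (weeks / doubling_time_weeks)

# With a 3.5-week doubling time, cases grow 10x roughly every 11.6 weeks.
weeks_per_10x = doubling_time_weeks * math.log2(10)
print(round(weeks_per_10x, 1))   # 11.6
print(round(projected(14)))      # 160000 after 14 weeks (four doublings)
```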
My feeling is we will know with more certainty the approximate order of magnitude of the issue by Christmas. In the mean time I somehow managed to save a bit recently... money sent. I hope that was paranoid of me.
I have a couple of remarks
1 - Your malaria/ebola comparison must be off somewhere. Here's how I see it: Malaria - $5 to distribute a net, some small chance of preventing a person from getting malaria, some very small chance of fatality from malaria. ~$4k per life saved. Ebola - $10 to distribute a kit, some small chance of preventing a person from getting ebola, 50% fatality.
So Ebola kits do not need to be anywhere near as good at preventing infection in order to be equally cost-effective. Kits need only prevent infection ~0.2% of the time to break even. Note: my calculation probably has some errors and omissions.
For reference, GiveWell's estimates, CDC projections
2 - The CDC is modelling the no-intervention case as exponential, doubling every few weeks. Reading their paper might inform an intermediate-case growth trajectory.
3 - It's worth considering how many millions are being poured into ebola reduction by national governments: if those governments are intervening effectively, then our inroads into ebola are likely to be proportionately smaller, in line with the smaller size of our monetary contribution.
4 - A recent relevant link, HT Carl
ETA: CellBioGuy gave an excellent response here.
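For point 1, the break-even prevention probability under the stated numbers can be computed directly. This sketch assumes every kit reaches someone who would otherwise be infected (which the parent comment does not assume), and it lands at ~0.5% rather than ~0.2%, the same order of magnitude:

```python
malaria_cost_per_life = 4000   # dollars per life saved via bed nets, as cited above
kit_cost = 10                  # dollars per Ebola prevention kit
fatality_rate = 0.5            # chance an infected person dies

# A kit that prevents infection with probability p saves p * fatality_rate lives,
# so break-even with malaria means:
#   kit_cost / (p * fatality_rate) = malaria_cost_per_life
breakeven_p = kit_cost / (malaria_cost_per_life * fatality_rate)
print(breakeven_p)   # 0.005, i.e. ~0.5%
```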
Not meaning to start fear mongering, but since we're talking about highly improbable events, I wonder how probable a mutation is that makes it spread much more effectively via aerosol.
A Finnish official covered this question on the news and her answer was that such a mutation has never been observed, and Ebola is already transmitted effectively enough so that there's no selection pressure for more infectivity.
I'm not worrying about aerosol. That's low probability.
I think it's high probability that ebola will become endemic, at least in Africa. I don't think it's crazy to believe that America doesn't have the institutional conscientiousness to stop it here.
The mildly good news is that I expect a vaccine to be developed.
And just as a side issue, I'm none too pleased to have a disease that's more likely to hit helpful people.
Can you expand on that?
I wonder how this should impact the decision of being helpful under a consequentialist moral system, if at all.
There was enough sloppiness at that hospital in Texas that I'm worried ebola will spread through incompetence. The knowledge and resources might be theoretically available, but they aren't being used adequately.
I suppose this is plausible, since even Western medicine is ridden with certain very problematic hospital bugs that spread, and are created, at least in part through incompetence.
I've seen some terrible hygiene by even some highly regarded professionals. Disinfection is often seen as a nuisance unless you're doing a procedure.
I can personally attest that you really have to remind yourself it's important, since you have to do it so often.
The first part of her answer is true, the second part is nonsense.
I think it's more likely for a disease that spreads through aerosol to mutate to become as deadly as Ebola. Like with SARS.
Um, diseases are generally under selection pressure to become less deadly, not more.
That just means that the mutant strain won't be as virulent as it otherwise would. It won't keep the mutation from happening.
Yes, and SARS was quite easily contained, a less virulent strain would be even easier.
You're probably right. I think if we're interested in the risk of superbugs we should be interested in the aggregate, not individual pathogens.
It is meant to iterate, which is how we've gotten from one case to thousands already. Biology is all about exponential growth. If each person infected now infects an average of less than one new person, the outbreak dies out. If each person infects more than one, it grows until it has infected everyone within reach, which probably means a fixed proportion of the populations of the affected countries. Pushing that number below 1 is a really big deal.
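The point about iteration can be made precise: if each case infects R others on average and that iterates, the total cases seeded by one index case is the geometric series 1 + R + R^2 + ..., which is finite only when R < 1. A minimal sketch:

```python
def total_cases_per_index_case(R, generations=None):
    """Total cases (including the index case) seeded by one infection.

    With reproduction number R < 1 the series 1 + R + R^2 + ... converges
    to 1 / (1 - R); with R >= 1 it diverges, i.e. the outbreak grows until
    something else stops it. Pass `generations` to truncate the series.
    """
    if generations is not None:
        return sum(R ** g for g in range(generations + 1))
    if R >= 1:
        return float("inf")
    return 1 / (1 - R)

print(total_cases_per_index_case(0.5))            # 2.0
print(round(total_cases_per_index_case(0.9), 6))  # 10.0: even R=0.9 multiplies by 10
print(total_cases_per_index_case(2.0))            # inf: why pushing R below 1 matters
```

This is also why an averted infection is worth more than one case: it removes the whole downstream chain, for as long as the epidemic lasts.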
Givewell's DALY numbers are based entirely on preventing deaths, and do not account for the long-term effects on malaria survivors. In addition to the DALYs of chronic disease, there is also lost productivity, which could have long-term consequences for the society. If the conditions are (or will be) malthusian, it is better for people to die of acute starvation than to be subject to chronic malnutrition; it is better that they die of acute diseases like ebola than that they suffer from chronic diseases like malaria.
No. Better medical care encourages women to have fewer children and therefore reduces population.
Oh, right. Damn real life, always giving us easy outs from the interesting ethical dilemmas.
Better for whom and by which metric?
Well, if they're roughly comparable in terms of moment-to-moment intensity of suffering, then obviously we (utilitarian consequentialists of any sane kind) would rather that a smaller number of people experience it for a brief period (and then die) than that a larger group of people experience it for a long time (and then die). It's not even a Repugnant Conclusion issue, since it's hard to argue that chronically malnourished lives have positive value on the margin.
It's rather easy to argue: they don't kill themselves, so they presumably think their lives are worth living.
That's an easy argument but I'm not sure it's a correct one. The answers to the following questions may be different:
for at least the following reasons:
Even without taking into account cognitive biases and externalities, that argument only applies to people who don't believe in afterlife: if you thought that by killing yourself you would go to hell and by enduring shit thirty more years you would go to heaven, you'd think twice before picking the former.
So, um, you think that the life of the chronically malnourished consists of nothing but suffering comparable to that of someone dying from acute starvation??
It's not a repugnant conclusion to decide that going around killing the chronically malnourished is a good thing -- are you quite sure?
Not my area of expertise, but for the argument to work they only need to be within maybe an order of magnitude or so. Does that seem plausible?
It certainly sounds repugnant, but I don't think it's equivalent to the Repugnant Conclusion, which is what I was referring to. In any case, you are correct that that would be the logical conclusion to draw from what I said, which is concerning me. Population ethics has always confused me - does anyone have any idea how this one might be resolved?
I think the root issue is that you consider chronically malnourished lives to be not worth living. Is that so?
What kind of confidence interval lies between best case and worst case?
A better question is what kind of a distribution do they think they're looking at -- in particular, whether it's bimodal.
I don't think it's worth dealing with Ebola because it's not going to get that far. I think Ebola is overhyped. I think worrying about influenza this year and the upper respiratory infection from South America is far more pressing.
I'm late to the thread - just got pointed here from Slate Star Codex. I gave to Doctors Without Borders recently. Your analysis is pretty similar to mine except on two points. Douglas_Knight pretty well covered the first, about iteration. Supposing an infection prevention kit successfully averts an infection, we should not only count the expected half-a-life saved right away, but all the expected subsequent generations of infection that the averted case would have led to, until the end of the current epidemic.
Second, instead of assuming that a prevention kit goes to a random person in the entire population, I think it's more realistic to assume that it goes to the family member of a person who is showing symptoms, whether from the flu, food poisoning, or actual ebola.
It seems like quite a few people on Less Wrong are interested in improving the quality of their writing. "Writing" obviously covers many different pursuits, and perhaps every unhappy document is unhappy in its own way, but I'd like to share my own frustrations in this area and see if this is similar to others. If it is, maybe we can do something about it.
I can write well enough to get distinctions for undergraduate-level essays, but this doesn't seem like a very high bar. If you can comprehend an essay question, form a reasonably coherent answer to that question, and put forward this answer as a structured argument which the reader can follow, you're pretty much set. I understand these are exactly the features an undergraduate essay is testing for, but I want to be better than that. George Orwell didn't get his work back with "96%, Well Done". He got tears and accolades and enduring respect. While I don't want to be George Orwell, I'm not ashamed to admit I'd like those things.
I've read a few introductory-level books on subjects like written composition and rhetorical technique. It's given me a broader vocabulary to describe what's going on, and a selection of tips, tricks and patterns. I can say what's good about a piece of writing I like, but I can't fit it into an overarching coherent theory. I can steal elements of style that I like, but I don't really know why they're stylish.
I don't know to what extent this is a skill you just have to work at, or a body of knowledge I don't know where to learn. I'm lacking general support and feedback. Is anyone else in this position, and would they like to offer mutual constructive criticism? Alternatively, is anyone a secret gatekeeper of the arcane lore I seek?
Writing is hard. I know you don’t need me to tell you that, but any discussion on writing should begin with that statement. Writing is hard and studying it needs to be treated with the same seriousness as any other “hard” question. After all, not everyone has a book in them and those who do have to make the book themselves; it’s not pre-baked inside their genes.
The best statement on the difficulty of “studying” writing that I have encountered (everyone has at least one) was by Flannery O’Connor. Paraphrased: “Studying writing by discussing point of view or sentence structure or character development is like trying to describe a face by saying where the eyes, nose, and mouth are.” Everyone writes and learns to write differently. Some people, like E.B. White or C.S. Lewis, thrive on the academic, the grammar, the scholarly themes and canon and dissection. Others, like Jack Kerouac and James Joyce, thrive off raw passion and blatant disregard for distraction.
Your comparison between Orwell and undergrad essays is a good one. An undergrad essay has set parameters that dictate what will be in it and how it will be decided whether it has succeeded in its task or not. Orwell’s books have no set parameters. No book is lucky enough to have set parameters. Their success depends entirely on the society that accepts them.
For my part (and I am only one person in a massive field), the greatest tool for a learning writer is to write. And not write privately but write things for the consumption of others. I have tried writing privately to “perfect” the projects I work on. Doesn’t work. I just write for myself and make nothing. I’m not saying do like Stephen King and Dean Koontz and abandon quality for output. No, not at all! That’s anathema to improving your writing ability. After all, I don’t think it’s a lack of talent that keeps Donna Tartt from publishing more than one book a decade. But do not sequester your writing away until it is “ready” or “skilled” or “perfect.” Get it in front of someone, because by coming in contact with other minds your writing will do what it’s supposed to: act as a tool to convey ideas and emotions. By hitting against other people, you can tell where your tool needs sharpening.
If you would like a critic, I would not mind working as a reader and giving you what I can. I really do enjoy editing quite a bit and currently have a bit of a bet going with some fellow writers so that we can each improve the other’s output. I would not mind doing the same here. Just message me. But, beyond doing the thing, I don’t know of any secret gate to pass through. There is no royal road to writing, so to speak.
Well, there is, but the toll is enormous. Worse than the Florida Sunpass.
Stephen King argues that writer's block is a myth. Is writing still hard if you're willing to just set pen to paper without trying to filter for good ideas? I find this kind of free writing to be almost repulsive to me, but I think it is just a weird bias that I have and a lot of people have but don't ever move past. I know that many of my favorite writers endorse reckless first drafts and brainstorming sessions.
Maybe writing's difficulty is overestimated by the general public, but underestimated by amateur writers? That seems compatible with both our positions.
A related anecdote: Stephen King had writer's block while writing "The Stand." He overcame it by detonating a bomb and killing half the main cast.
I find the bemoaning of so many writers regarding writer's block to be a far less serious issue than they intend me to think it. I won't say I don't believe in writer's block. I'll say my evidence is inconclusive. My personal experience has been that "writer's block" tends to stem from other, less "artsy" problems. Laziness, ennui, angst. Typically, for me, writer's block is overcome by shutting up and writing or by admitting "this project isn't going to work. Restore, restart, or quit."
Writing is hard.
Alright, here's my list of writing resources (in no particular order):
Books:
This is an excellent article about writing:
http://nautil.us/issue/18/genius/shakespeares-genius-is-nonsense
Some more inspiration:
http://www.ribbonfarm.com/2012/01/11/seeking-density-in-the-gonzo-theater/
http://www.ribbonfarm.com/2012/05/03/rediscovering-literacy/
AI luminary Schmidhuber has written about complexity and beauty, and I've found his thoughts helpful:
http://people.idsia.ch/~juergen/beauty.html
My blog is one, long, ungrammatical, rough-draft experiment, for reference, e.g.:
http://meditationstuff.wordpress.com/2014/09/29/what-rationality-actually-looks-like-from-the-inside-4500-words/
http://meditationstuff.wordpress.com/
I don't have a good sense of my writing skill, as I don't think I've written anything like an essay in the last five years. Otherwise I'm in a similar position.
How about starting a group where every week one person hands in a post and then everyone discusses what can be improved and how to improve it? Maybe the discussion process could even be live?
Thanks for this - it's a very interesting topic. You might want to look into Pinker's Sense of Style, which has been well-received, on this topic (I just started reading it).
It would be great if you could give us an overview of what you've learnt, as a starting-point of further discussion.
Also, I think it's important to know what sort of quality writing you're aspiring to. Good scientific writing is very different from good literary writing, for instance.
Here's a very broad, shallow overview:
Classical rhetoric is a lot like TVTropes, except the tropes have names like "tricolon" and "synecdoche" instead of "Sean Connery is Going to Shoot You". If you've ever noticed a common device that speakers and writers use, it probably has a name in Greek. These devices serve purposes. You might read a draft of what you've written and think "this sentence sounds weak and lacks impact, but [rhetorical device] is bold and punchy, so I'll construct one and stick it on the end".
There's quite a lot of material available on standard essay structures and essay types for different purposes (exposition, persuasion, etc.), mostly directed at students. My prototypical "smart person" would probably find 70% of the content in one of these "obvious", but I imagine the missing 30% would vary from person to person.
Grammar and linguistic knowledge are a powerful rhetorical tool. A really obvious example is the idea of the passive voice sounding evasive and blame-shifting (e.g. "mistakes were made" rather than "Colin made a mistake"). Understanding the mechanics of sentences is useful in constructing them. I'm currently waiting for a textbook to arrive on this specific subject.
Poetic metre and scansion provide a very useful framework for describing verbal aesthetics. The rhythm of spoken and written words induces emotional and attentional effects in the reader. Nice collections of words sound nice, and people will want to read them.
Informal logic seems like a no-brainer, but is still worth mentioning. If you know what an argument is, it's a lot easier to make one.
I want to present ideas both clearly and attractively. As an example, even when Scott Aaronson goes over my head, he's still a pleasure to read, and my confusion about individual points doesn't spill over to confusion about the whole piece.
Thanks. I like the Tolstoy reference above (every unhappy document is unhappy in its own way, etc.) and think that this comment highlights the mechanism behind it: there are many different ways in which you can write badly, including flawed arguments, poor language, bad structure, and so on.
I think that the best way to improve is detailed feedback. You can learn a fair amount from style books, but only so much, I would guess. Lots of the time, you don't see what mistakes you are making, and need someone else to point them out.
It's important that this feedback is precise: that it tells you exactly what you do wrong and what you could do better, on a sentence-by-sentence level, as it were. General and vague feedback is not nearly as useful, as it doesn't tell you what to do in order to improve. I like Christian's proposal below of a writing group where such feedback could be given.
For what it's worth I think you're already a very good writer, but of course everyone could improve. Including Orwell.
Regarding rhetoric, I personally prefer texts that don't include too many rhetorical devices such as personal stories, fictive dialogues, and so on, but which instead present the heart of the matter in a precise, structured, and non-roundabout way. Tastes differ here, however.
Relevant: Pinker's lecture at Google on this book.
I work at a small publishing house specialized in medical literature. This year we had an editor who had majored in Latin, and he urged us to bring the style of the classical humanities to our physical sciences niche. For example, he said we should follow Aristotle's rhetorical advice (announce what you'll say, then say it, then say what you just said), and insisted that the appeal to authority was valid because we always had to cite sources.
Eventually he left the company for his own reasons, but this made me think about the different assumptions about writing that people can have, depending on their background. This guy believed any attempt at communication was unavoidably ambiguous because that's the way language works. I try to make my writing efficient and clear because I believe language should be transparent.
Perhaps what you already believe about language will shape what you will strive for in your writing.
I think a transparent style is what you need in medical literature. Repeating yourself certainly aids clarity, as long as you aren't annoying your reader. Giving summaries at the beginning is great. Don't save the "punchline" of the result for the end, that's for literature and some journalism, not anything academic.
I don't know that the classical humanities can lay claim to these ideas though.
My experience is that writing isn't a field that would have an overarching coherent theory - instead it only has an endless selection of "tips, tricks and patterns", as you put it. Becoming a better writer is just about constantly expanding your toolkit of tricks, by being explicitly told tricks, taking apart other writers' work to discover theirs, and experimenting with inventing new ones. Read a lot and write a lot.
(Stein on Writing is my own favorite collection of tricks, covering both fiction and non-fiction writing.)
The best way to improve writing will vary largely by person. Here's what worked/is working for me that I think generalizes better than average:
Also, getting to this level doesn't need to take much. You can get absurd mileage out of "Omit needless words".
Find a person other than yourself who's writing something and would like a beta. Things that you should probably look for: intelligence, similarity in writing goals, divergence in writing style.
Act as a beta reader. Make suggestions liberally. That is, if you see a change you might make and are unsure if it will improve the writing, suggest it; your primary will either accept or reject it. The important part is that you discuss points of disagreement.
You can, of course, go at this from the opposite side (as a primary who finds a beta), but my system 1 says the other way is easier. The guy I currently beta for had to put in an absolutely absurd amount of work to get to the point of taking on betas, whereas I just had to leave a review saying "this is pretty good, but you could improve x, y, and z, and you could really use a beta, and I'd be up for it because this story will be awesome if you can clean this stuff up."
Note that although Strunk and White might have some reasonable advice on some topics, many of their recommendations are linguistically ignorant, just plain nonsensical, or violated all the time by excellent writers (including the authors themselves, sometimes on the same page they offer the advice). Here is a well-informed, highly negative review.
I very much agree with your advice about acting as a beta reader. It's really helpful for both parties and gets you lots of brownie points too!
I came to the open thread to add a comment saying "Everyone should read Steven Pinker's The Sense Of Style". I didn't know I'd have the perfect occasion.
There's no overarching coherent theory, but there are ideas both at the detailed level of choosing words and at the higher level of structuring your writing and taking your reader on a journey. I also enjoyed reading it immensely and found it very hard to put down. Everyone here writes, so everyone here should read it!
I'm a little over half way through it, and already willing to endorse it wholeheartedly.
I would recommend writing regularly and refining the skill. That's the only way to improve effectively.
Have you read any of Paul Graham's essays? I'm always very impressed by the quality of his writing.
I've read pretty much all of them, but have something of a complicated reaction to them. I think he takes quite an experimental approach to essay-writing, and some of those experiments pay off more than others.
The best feedback will come from seeing how people in your target audience respond to your writing. You don't want to necessarily take on board all of their suggestions, but you do want to see how they seem to receive it in general. I mean, people are good at knowing what they like, but bad at knowing what they would like.
Is writing really as hard as people here make it out to be, or is it an endeavor that makes people set absurdly high standards for themselves?
This is the third pre-submission edit of this comment, so you probably have a very good point.
The standards you set for yourself depend on your goals. If your goal is to be a successful blogger or book author it makes sense to set high standards for yourself.
Freud owes a lot of his influence to good writing skills. We probably wouldn't speak of Darwinism if Darwin hadn't written such a well-written book as On the Origin of Species. Darwin would be much more just one of many biologists if he hadn't written a book that people enjoy reading.
The C programming language owes much of its success to a well-written book.
Your examples of extreme success kind of prove my tongue-in-cheek point.
I think I have some excellent advice for you this time.
I've noticed very recently that in my own writing I tend to optimize for the strength of individual sentences instead of for the strength of paragraphs or arguments as a whole. Because I write one sentence at a time, it's tempting to have each sentence make its point as directly and powerfully as possible. But this is a little bit like playing each note of a song as loudly as possible in an attempt at maximum musical impact. A more skilled performer would play some notes softly and others louder, and use that to emphasize certain ideas over others within the work. I think writing is the same way, and some sentences or paragraphs should be softer or louder than others. The main function of some sentences should be what they do for other sentences, rather than their own arguments. Changing my writing habits in this way will be difficult, but I think eventually highly rewarding.
I don't know whether you have a similar problem or not. But I suspect it's a common one, and hope someone will find this advice useful even if you don't.
Louie Helm wrote for his own online magazine Rockstar Research that a very popular post he wrote a few years ago, Optimal Employment, has directly led to dozens of rationalists finding happy work in Australia, one of whom even started a business based upon Louie's model. I was thinking of using it as an example of a positive externality of Less Wrong, via flow-through effects, for a post I was researching. However, my friend amcknight pointed out that several questions and concerns were raised in the comments of the original post that neither Louie nor anyone else answered to the satisfaction of the incredulous: for example, whether Louie got his math wrong in his Fermi estimates, chain of conjunctive calculations, etc.
I'm having trouble figuring it out for myself, so: what do you think? Were Louie's recommendations sound back then? If they were, are they still sound now? Has the information, or the environment, changed so much that neither the post, nor its recommendations, are still relevant to, e.g., the rationalist community, or average young Americans?
The prediction that it would become a popular thing certainly hasn't come to pass, and at least one of his factual claims at the time was false; I live in the UK and while there were and are people who spent a year or two in Australia, they didn't spend it bartending in the outback and didn't save money.
I think many LW-cluster people, myself included a) are much better at following a plan than being creative b) undervalue themselves. The idea that dozens of LW-cluster people could be given an explicit plan for how to earn above-average discretionary income, follow it, and earn above-average discretionary income, does not surprise me. But I don't think that plan has to be optimal or even above-average for this to work.
And FWIW in my particular field of employment, I've received substantially higher offers when applying for "subfield I have 5 years' experience in" rather than "subfield closely related to the subfield I have 5 years' experience in". I don't know if this continues forever, but I do think that in my early twenties I vastly underestimated the (economic) value of experience. The straight-up career path is alive and well in at least some cases.
In addition to the mixed reaction to Louie's post in the comments section, your response above seems sufficient for me to change how I would include Louie's post. It's popular enough, and it launched a business, so I will include it as an example of a positive externality of Less Wrong, but it's not an example that will dominate the others. I might contact the business Louie's article inspired directly to ask them what they think of this. I figure this would return information at least as reliable as Louie's estimates, assuming the company in question will be honest.
This looks interesting. Abstract:
Seconded. Damn paywalls.
Huh? The paper is freely available from the link.
I'm currently working through Getting Things Done by David Allen and can recommend it to almost anyone, even if you do not have problems with productivity or organisation as you can get a better understanding of what actually makes you work.
For example I finally found out why I like Evernote: It is a trusted system to just put stuff in, in the jargon of GTD.
I read GTD about four months ago and have started using it at work. It's made an incredible difference by 1) reducing the instances I feel overwhelmed or confused about what to do 2) helping me to better track my primary projects 3) helping me not to forget about small projects or put off the steps to getting them started 4) the system reminds me of what I'm waiting for other people to do, which I never tracked efficiently before.
I read it a while ago, but reading it alone wasn't enough to have an impact for me.
I'm looking for a short story that someone on here wrote.
It was about free will. The story opens with the main characters approaching a space station built by an old race, operated by a custodial AI. The AI keeps predicting what they're going to do and say in advance. The first mate character gets agitated, the captain keeps her cool. Then there's some sort of disaster (approaching enemy fleet?), they have to get out fast, and the captain makes use of the AI's simulations of them to get a good outcome.
Has anyone successfully communicated their philosophy / sense of life to someone through conversations or letters? I'm familiar with projects to do that through blog posts / books / lectures / etc., but am not that familiar with deliberate attempts to do that with feedback. I've had many conversations in life and forums and so on about small issues, and how those issues deal with principles and philosophies and so on, but it seems to me that there are likely to be good strategies for taking advantage of the feedback that conversations allow that I wouldn't think of myself but can imitate.
(It seems to me that people do a lot of sorting by sense of life or philosophy--"that this person doesn't like X or believe Y is indicative of a deep incompatibility"--but it seems to me that if you're trying to communicate on a deep level with a specific person, you need to find differences and then communicate through them instead of just writing the other person off.)
Most of the time success isn't binary. Five years ago I think there might have been people who could fully understand me, but today I think I diverge on enough fundamental questions that communicating my full philosophy is unrealistic.
I don't need another person to understand me fully. It's perfectly fine that different people have different life philosophies. If you want to affect other people, pushing something on them is usually not very effective.
Two weeks ago I basically found that I can fix a long-standing health issue by moving upwards when I'm moving forward and downwards when I'm moving back. It took me roughly 2 1/2 years of, on average, weekly exposure to the idea to fully grasp it so I could actually use it. It fixed an issue that according to doctors I was supposed to live with for the rest of my life, so it's a meaningful insight.
Then a week afterwards I go to my somatics group, and there's a man who has trouble with his left shoulder. It's tense. When he moves forward he moves down instead of upward. I clearly see how that's his problem.
However it's not something fixed by a few words. I do mention the issue but my somatics teacher says (I paraphrase): "Well, yes. However he's not at the point where he can integrate that knowledge."
I'm unable to communicate a single sentence of wisdom in a way that doesn't take months, and I was very conscious of that fact. I know the time it took me. I have read the research on phenomenological primitives and know that you can't easily give someone a new one. The idea takes just a sentence, but that doesn't help.
It's like the one time I spent hours trying to explain the concept of recursion to a first-year bioinformatics student. Recursion is a simple concept, but explaining it to someone who doesn't have any basis for it is hard. This was just before the exam, so a semester of Informatics A in Haskell hadn't succeeded in communicating the idea. I gathered that the course chose Haskell to make a point of teaching recursion; the prospect of successfully teaching the primitive of recursion made using Haskell, instead of a language that's actually useful, seem worthwhile.
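For what it's worth, the idea itself fits in a few lines. Here is the standard first example (in Python rather than the course's Haskell, purely for familiarity); the gap isn't the code, it's the primitive behind it:

```python
def factorial(n):
    """Classic first illustration of recursion."""
    if n <= 1:
        return 1                     # base case: stop the self-reference somewhere
    return n * factorial(n - 1)      # recursive case: same function, smaller input

print(factorial(5))  # 120
```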
If you are communicating over a big inferential gap, trying to explain multiple primitives at once to explain your full life philosophy is an impossible undertaking. It's what you are up against if your own life philosophy is sufficiently complex and contains nonstandard ideas.
Feeling bad about that fact isn't helpful. You can't explain graduate mathematics to a non-mathematician and hope that they instantly get it. If you have a complex life philosophy, it's similar. If you are on LW, there's a good chance that you do have a complex life philosophy, even if a lot of the ideas seem trivial to you.
Giving up the need to have the other person understand you fully opens up a lot of freedom. You can start to listen to the other person. If you see a door that you can open to help a person make an insight you can go for it.
Most of the conversations I had that strongly influenced someone else were very targeted. Understanding the other person was key. Carrots don't grow faster if you pull. When doing change work on important beliefs, many untargeted pushes create resistance. If a person defends a belief successfully, that strengthens the belief.
I agree that one should not pin hopes to changing someone else's life philosophy. The specific reason I'm interested in this is because there are people I want to talk to about what I'm thinking about now, but I realize that I can't do that without talking about what I was thinking about three months ago, and I can't talk about that without... But this gap is just going to increase unless I take deliberate steps to decrease it. (This is exactly what you describe later in your post.)
[edit]And, since this wasn't as specific as it could have been, they don't have to agree with my position; my first goal is them knowing my position well enough to make correct predictions about it. If they like it better they'll move on their own.
I'm also realistic about the timeframes involved. I think it's been about a decade since we were fully philosophically compatible, and taking another decade to close that gap seems like it might be necessary.
I'm reading this as "haste makes waste; if you learn the other person's philosophy, you get credibility for listening and that knowledge lets you avoid the parts with the most resistance, target the parts that make the easiest jumping-off points for explaining your positions or are the most fertile places for your ideas to grow in. Once that's established--and you'll only know it's established when you listen to them and hear that it's taken root--then their philosophy will have shifted and there will be a new easiest spot." Is that the main spirit of it or is there something I missed?
So, the "polymath" thread seems to have ground to a halt. I can't tell whether the discussion just stopped going anywhere (possibly due to elimination of low-hanging fruit), or it dropped off the recent-posts list and people forgot about it, or what. Does anyone have any insight into what's going on?
Lots of good stuff happened there, but it looks like it'll have to be curated fairly actively to continue to make progress, and unfortunately that doesn't fit with my current duties.
If someone else would like to act as a leader for it, I'd be happy for that! In any case, I'm glad we tried it, and thankful that so many people jumped in.
If I recall correctly, most successful polymath projects did in fact have active leaders, such as Timothy Gowers and Terence Tao, who helped to direct the efforts of others. Maybe without such active leadership a project tends to lose its focus. However, I do not think I am familiar enough with the research in this area to take up such a task myself (and at the moment I do not have much time to deeply familiarize myself with it). Nevertheless, thank you for creating that thread. This is precisely the type of thread I would love to see more of on LessWrong.
Dale Carrico on Yudkowsky and Bostrom:
http://amormundi.blogspot.com/2014/10/fluffing-yud.html
This seems to be an example of negative commentary being primarily negative, rather than primarily commentary. One specific concrete claim that stood out to me without needing to unpack:
is, I believe, simply false.
I agree with your overall characterization of the post, but on the specific concrete claim: one of the commenters there cites this article as saying this:
which I think is close enough to DC's claim. I have no way of telling how accurately that article represents EY's position or whether the quotation itself is accurate. Here's EY characterizing another statement in that article as a lie (though for what it's worth I think it can be interpreted consistently with what EY says is the truth -- but of course that doesn't mean it wasn't intended to mislead).
Okay, I've updated somewhat in the direction that Eliezer actually said that at one point. (I was previously assuming that it was a mash-up of other things he's said, but those things were all Sequences-or-later and this article is pre-Sequences.)
With the other statement,
It seems important to note that Eliezer was talking about a program unlike any program that had ever been turned on, when we knew less than we did at the time of writing. Without that detail, it can be interpreted as not-completely-literally-false, but I wouldn't call it truthful. (The fact that Eliezer was not able to say it at the time seems less important, but leaving it out obscures the timeline.) When searching for the source of the "if you take the Singularity seriously" line, I found another comment by Eliezer on the subject: http://sl4.org/archive/0104/1163.html .
I say this with the benefit of hindsight, but just remember that not only Eurisko (the 5%-risk program) but also its successors like Cyc, which benefit from vastly greater computing power and decades of architecture improvement, fall far, far short of being FOOMable AIs.
So if somebody had estimated a 5% risk for Eurisko, and then we saw what actually happened, I would update toward them being substantially too paranoid.
I don't think "it didn't even come close" is sufficient to say that 5% was too paranoid.
I know the principles behind an atomic bomb, but I don't know how much U-238 you need for critical mass. If someone takes two fist-sized lumps of U-238 and proposes to smash them together, I'd give... probably ~50% chance of it causing a massive explosion. But I'd also give maybe about 10% probability that you need like ten times as much U-238 as that. If that happens to be the case, I still don't think that 50% is too paranoid, given my current state of knowledge.
There are people who do know how much U-238 you need, and their probability estimate will presumably be close to 0 or close to 1. And today, we can presumably work through the math and point out what the limits of Eurisko are that stop it from FOOMing. But if we hadn't done the math at the time, 5% isn't obviously unreasonable.
Tangential, but: U-238 is fissionable but not fissile; no amount of U-238 will give you a massive explosion if you bang it together. It's U-235 that's the fissile isotope.
(Even banging that together by hand won't give you a massive explosion, though it will give you a moderately large explosion and an extremely lethal dose of radiation: the jargon is "predetonation" or "fizzle". You need to bring a critical mass into existence hard and fast, e.g. by imploding a hollow sphere with explosive lenses, or a partial reaction will blow the pieces apart before criticality really has a chance to get going.)
I don't think I can prove that I'm not coming at it from a hindsight biased perspective.
But I think I can say confidently that today's technology is at least a qualitative leap away from Strong AI, let alone FOOM AI. To make that more clear, I think no currently existing academic, industrial, or personal project will achieve Strong AI or FOOM. Concretely:
In the next 2 years the chance of Strong AI and/or FOOM AI being developed is no more than 0.2%
So that's a two-year period where I estimate the chance of Strong AI or FOOM as substantially less than what EY says, even in retrospect, we should have estimated Eurisko's risk of FOOM to be.
The author admits the quote is only anecdotal, but it does seem plausible to me. EY has said stuff more dumbfounding than that.
More generally, it's just a snarky blog post. Nothing wrong with that; posts are allowed to be snarky. And there's plenty to criticize about uncritical belief in a singularity or the work MIRI does (which this post isn't doing, it's just reminding us of the existence of those criticisms via snark).
Does anyone have interesting ideas for machine learning projects? This would involve obtaining some large dataset and then doing prediction or clustering on it.
I'm doing this as a final project for a college course. Examples of past projects:
I still believe that the spaced repetition dataset produced by Mnemosyne is underanalysed. Gwern put effort into bringing that dataset into a good format.
In particular:
Is there a way to calculate the brainpower for an individual day? Reviews made on a high-brainpower day would have a higher success rate. Having such a metric would be very useful for Quantified Self (QS) purposes, because it means that everybody who uses Anki or Mnemosyne daily would get a free QS metric.
Is there a better way to calculate the interval for cards than the SuperMemo algorithm, one that can be proven to be better based on the data from the Mnemosyne set? If so, how much better? (If you find something that shows real improvement over the status quo, the next step would be to write an Anki addon that does A/B split testing with it.)
I think that especially the first task has the chance to provide a decently cited academic paper.
I don't know how much effort either of those tasks takes and whether it would go beyond a project for a college course, but both projects should be interesting and highly useful. You might also find other interesting questions when you investigate the dataset.
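For concreteness, the baseline that any learned scheduler would have to beat is SM-2. A rough sketch of the published SM-2 rules follows (my own transcription, not code from Anki or Mnemosyne, which both modify the algorithm; I also leave the ease factor untouched on a failed review, a simplification):

```python
def sm2_update(ease, reps, interval, quality):
    """One SM-2 step. quality is the 0-5 self-rating of the review.
    Returns the updated (ease, reps, interval_in_days)."""
    if quality < 3:
        # Failed review: restart the repetition sequence for this card.
        return ease, 0, 1
    if reps == 0:
        interval = 1
    elif reps == 1:
        interval = 6
    else:
        interval = round(interval * ease)
    # Adjust the ease factor based on answer quality, floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return ease, reps + 1, interval
```

A learned alternative would then be benchmarked against this by predicting recall probability at review time and comparing log-loss on held-out Mnemosyne logs.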
Thanks, that looks great! ~60% probability I'll do my project on this, actually.
The second one actually looks more interesting and useful to me, since it would directly lead to improved scheduling. There's a lot of literature on the spacing effect, and it doesn't look like anyone's actually done empirical analysis of it on this scale before. (And I do a lot of reviews daily, so I wouldn't be surprised if this project actually took negative net time!) There's also disagreement between e.g. Supermemo and Anki about which algorithm is best, so the issue isn't very settled.
The first one (calculating brainpower for a day) seems easy to do to some extent - just look at the average time each review took, or some function of time-per-review and number of reviews. I'm doubtful about whether you could get more reliability out of looking at e.g. card ratings. Perhaps a better way to measure brainpower would be n-back, or Seth Roberts's arithmetic test.
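The easy version can be prototyped in a few lines. A sketch, assuming the review log has already been reduced to (date, seconds_taken, correct) tuples (a hypothetical intermediate format, not the raw Mnemosyne export):

```python
from collections import defaultdict

def daily_brainpower(log):
    """log: iterable of (date, seconds_taken, correct) tuples, one per review.
    Returns {date: (mean_seconds, success_rate)} as a crude daily metric."""
    days = defaultdict(lambda: [0.0, 0, 0])  # total_seconds, n_correct, n_reviews
    for date, secs, correct in log:
        d = days[date]
        d[0] += secs
        d[1] += int(correct)
        d[2] += 1
    return {date: (total / n, ok / n) for date, (total, ok, n) in days.items()}
```

One obvious confound is card difficulty: a batch of easy new cards inflates the score, so a per-card baseline would probably have to be subtracted first.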
Yes. Wozniak, who wrote SuperMemo, did nearly all his work on his own. I think there's a good chance that he missed significant things that are known in 2014 about machine learning. Anki and Mnemosyne are also both written by people without strong knowledge of machine learning.
Maybe some deep learning algorithm is simply better than Wozniak's idea.
The problem is that not every card is similar. If I add 100 new cards in a single day and go through them, and they are all relatively easy compared to the cards I usually answer, I will affect the brainpower score if you simply calculate it the easy way. Thinking up a way that's robust to such effects is where it gets tricky.
Doing n-back or arithmetic tests means that you need to spend additional time. I have put quite a lot of thought into the issue, and even did arithmetic tests via a self-written Android app for over a year, and I have come to the conclusion that we simply won't get a significant number of people to do this. A lot of people are already doing Anki or Mnemosyne and would get free data without spending additional time or mental effort.
Seth Roberts's arithmetic test has the problem that it doesn't tell you how to treat a speed-up at the cost of more errors. If I remember right, butter produced a speed-up but raised the error rate slightly. The same problem comes with Anki. I often observed speed-ups in answering cards that come along with higher error rates.
Long-term memory brainpower is also an interesting metric. As far as I know there are no good tests for it. At present psychologists have tests for short-term memory and for reaction time on tasks like arithmetic.
Having a good way to measure long-term memory brainpower at daily resolution might be useful for research about diseases like Alzheimer's and detecting it in its early stages. Researchers cite their tools, so a relevant paper that gives them a metric for long-term memory brainpower has a good chance of being cited widely.
Having a brainpower metric would also allow for better scheduling. Let's say one day you are ill and your brain isn't working properly. You do your reviews. Various cards which you reviewed successfully don't get the boost that they would get if your brain were working properly.
After a month has passed, an algorithm that detects days with low brainpower could spot the problem. It could then reduce the intervals of the cards reviewed on that day that haven't been reviewed since. That means you get a better scheduling algorithm once you solve the brainpower issue.
I'm not sure about the effect size, but AnkiDroid has >1,000,000 downloads. A lot of smart people use Anki; from an effective altruism perspective, making smart people more effective is highly useful.
Does anyone know if Schlitz and Wiseman (or anyone replicating) ever figured out what went wrong in their joint experiment on remote staring? My google-fu is failing, and the one link I found doesn't work.
(Paper: link, Yvain writing about it: link, section IV.)
Does anyone have any serious thoughts about anti-Ebola preparations one could take? (Please keep 'it is not a big threat' responses to a minimum - I'm aware of that, but am interested in the question anyway).
By "anti-Ebola preparations" do you actually mean "minimizing your chance of getting infected"?
Take a tent and go solo camping. Somewhere up North :-)
That increases other (personal) risks. However, it answers the original question - though not quite correctly, it does not quite minimize the risk of infection - if you accept increasing other risks as a price for decreasing Ebola-risk, there are (much) more dangerous places to go camping, with (much) less Ebola risk (e.g. abandoned underground salt mine). If you accept increasing other risks beyond any reasonable limits, then the answer (which you might very well get from an optimizing AI) is very simple - shoot yourself. Future risk of Ebola infection - zero :-)
Ordinary disaster-preparedness things - have some emergency canned food and maybe bottled water (get a fire extinguisher while you're at it - not for ebola, but it's more likely to save your life). Get in the habit of taking your temperature daily, in a consistent way and at a consistent point in your routine, and plug it into some quantified self thing, then you'll have an idea of what's normal and be able to spot a fever sooner. (If your toilet allows, a habit of inspecting your poop can also give you early warning of irregularities). Getting a flu shot will lower your odds of a false alarm. Wash your hands frequently, especially after shaking hands with anyone (or getting their bodily fluids on you, but that's hopefully common sense); wear gloves in public places if you can get away with it; be sure to keep any open wounds covered, even if they're small (and obv. don't have sex, especially with promiscuous people). Work from home if you can, avoid crowds, and especially avoid aeroplanes (bear in mind this is actively bad for your odds of not dying; even during an ebola scare, driving is far far far more dangerous than, well, pretty much anything).
This could be a useful habit anyways. Very roughly speaking: gray poop - liver, floaty greasy poop - pancreas, undigested poop - small intestine, black poop - proximal bleeding, bloody poop - distal bleeding, watery poop - lactose intolerance or inflammation, rock solid poop - diet lacking fiber.
Try to encourage getting it stopped and stamped out at the center of infection, Western Africa, where it is being maintained in the population, expanding and throwing off infected travelers. Money and political pressure are the presumed ways to do this, in the absence of direct medical/epidemiological skills in the efforts against it.
Also encouraging your government to stop flights to the affected areas if it hasn't done so already.
Is there convincing evidence either way on Speed Reading? Some people swear by it, others claim that it doesn't actually provide an improvement over skimming.
This is only an anecdote, but I've always been an extremely slow reader, but worked hard to fully comprehend everything on the first read-through (at least for subjects that weren't extremely subtle and required lots of time to chew over). An example of this is that when I took AP U.S. History, I could just read the textbook once and ace the tests. This isn't just about having a photographic memory (which I don't have), this is also about synthesizing facts into patterns and ideas as I read. I find this very helpful and do the same thing while following whiteboard talks (except I'm apparently a much faster verbal learner, or at least I don't have trouble following talks in real time at all).
I'm not sure what direction this anecdote points in, but at the very least I'd personally be afraid to do speed-reading because it would mess up a pretty good system I already have in place.
I've experienced the same when reading philosophy literature for a class. With my slow reading I was also able to remember individual quotes and their locations well enough to retrieve them pretty well, and both write essays and answer multiple choice tests. I attribute this to taking as much time as I needed on a readthrough to "digest" the text and even pause and mull it over.
There seems to be a moderately hard speed-comprehension tradeoff curve. A few techniques might shift the curve outward, letting you have more speed and more comprehension, but mostly they give you more flexibility to choose where you want to be on the curve.
This matches my experience. Speed reading software like Textcelerator is nice when I want to go through a fluff story at 1200 WPM, but anything remotely technical requires me to be at 400-600 at most, and speedreading does not fundamentally affect this limit.
Reading technical material at 600 WPM would still be much faster than the average person.
True. I've always read things around that speed by default, though, so it's not related to speedreading techniques, and I don't know how to improve the average person's default speed.
"Default" is a deceptive word. You probably didn't read at that speed when you were 10 years old. Somewhere along the line you learned it. Given that you learned it and don't know how you learned it, there's also no good reason to assume that you are at the maximum that's possible.
Post-Singularity Worldbuilding Quirks?
If you take it as given that...
... then what random background details might result that are both...
The reason I ask: I'm writing a story in such a setting, and am hoping to tap into the local hivemind to, possibly, help flesh out some of the background sections - ones not directly relevant to the plot, but which imply a greater depth of worldbuilding - that I wouldn't have thought of on my own. As possible examples: particular species that a weakly superintelligent post-human AI might have decided to wipe out, such as cruciferous vegetables; new species that similar WSPHAIs might have decided to let loose, such as snake varieties with interesting chemicals to be milked of their venom, or a de novo species resembling sparrow-mouse griffons; cultural quirks in one relatively isolated area or another, such as a seemingly ordinary group decision that masks are terribly comfortable to wear; odd aspects of language development after a few decades...
What comes to your mind?
I'm not sure what this means exactly. Are they returning space explorers who are surprised by recent developments (a la Planet of the Apes)? Luddite survivors who experienced the transition and rejected it? Members of an uncontacted tribe or some primitive culture with ethnographic boundaries respected by the machine? Each of these will interpret a post-singularity world in a very different way, I think.
'Singularity' is code for 'we don't know', so as a writer you're permitted just about anything. But the most fun I've had with post-singularity fiction is when there is a dominant singleton with running themes and strong personality quirks- The Optimalverse is the reigning champion here, in my opinion, but there's also the famous I Have No Mouth and I Must Scream. Gods are fun to read about when they're mad in some way, or at least when they seem mad from a human perspective. So it's worth thinking particularly about the forces (that is, the choices and personalities) that give internal structure to your post-Singularity world. Randomness is not compelling.
Aside from that, my advice would be to avoid anything that is too much a fantasy trope. Try not to get in to the habit of thoughts like "It's a dragon, except [x] is different." Make sure it's your world that's driving these things, and not your genre.
Actual examples, as requested- although these probably suffer from being 'too random' since I don't know anything about your world:
Exactly 1.4 trillion biological humans, cloned with some variations, buried underground in cryogenic stasis near the Mohorovicic Discontinuity, on a timer to wake up automatically in ten million years. One of many contingency plans in case of catastrophe. They have never been conscious, but there is a kind of dream.
A machine intelligence (or the machine intelligence) has begun to redirect comets and asteroids from the Oort Cloud to collide with Venus. Most people assume this is part of a terraforming effort, but that theory fails to explain why the collisions always occur in groups of three equidistant points along a great circle.
There is a handful of bipedal, roughly humanoid robots walking across Asia. They walk in a straight line, climbing directly over any terrain features to avoid deviating from the path. Any time they encounter a prepubescent human, they ask her to give them directions, and will change course to whichever direction she points. Each is separated from the others, and seems incapable of acknowledging their existence.
In Antarctica, there is currently a replica of 17th century Paris carved entirely out of ice, detailed down to the level of individual ice cobblestones and ice candles with frozen flames in ice chandeliers. Last year it was 20th-century Jakarta, and the year before that Beijing. As the year progresses, the replica changes subtly as if it were lived in; furniture moves, ships-of-the-line are slowly completed, footprints appear. Nobody has yet taken responsibility.
I'm letting myself be inspired by Robin Hanson in a number of aspects, and had the intelligence explosion focused in high-population and urban areas, with the human survivors being those who avoided being in a city during the critical period.
I'm not sure I could justify "trillions", given what I've established for the setting so far; but for a more modest number, this is quite possible. (In fact, it's a variation on an idea my protagonist once had, but never had the resources to attempt; though that version of the idea included staggered release times.)
I've had a Kessler cascade turn the orbitals into a death trap for anything trying to leave Earth, partly on a narrative level to avoid self-replicating Von Neumann things in space overshadowing everything my planet-bound protagonist could even attempt, and partly in-setting as a result of the conflicts that arose during the Singularity.
Ah, now these I could use almost without alteration, and, at least as importantly, as springboards for further ideas. :)
Non-Linnaean wildlife. Built de novo by the superintelligence; made of the same sorts of organic materials as normal species, but not related to them; possibly not nucleic-acid based/non-reproductive. Their inner workings are simpler and more efficient; no symbiotic mitochondria and chloroplasts, but rather purpose-built modules. They are edible, and the survivors know the unique taste of, e.g., their 'muscle' tissue, which is not actin/myosin based.
Don't think we need a superintelligence for that.
Interesting! But while we're a lot closer than I realized, we probably aren't going to be thoroughly out-designing evolution from the bottom up on macroscopic animal-like creatures any time soon.
Depends on the criteria of "out-designing". If they are something evolution had never any reason to optimize for (e.g. lots of tasty-for-humans meat fast), I don't see why not.
I think "from the bottom up" is the hard criterion. We can fiddle with the knobs evolution has produced, but it doesn't sound like we have the insight to replace basic building blocks like mitochondria and [dr]na.
Well, how deep is your bottom? You said "made of the same sorts of organic materials as normal species", so did you just mean carbon-based chemistry? something that depends on slow room-temperature reactions in liquids and gels?
You want something different, but not too different (like a metal-based robot), so what's the Goldilocks distance from plain old regular life?
I think my Goldilocks range is along the lines of 'probably made of proteins and lipids and such; preferable edible or at least biodegradable by ordinary bacteria (I don't know what this requires); a human non-biologist without tools could mistake it for normal'.
But it's pretty interesting to think about possibilities at other ranges, too.
Vinge's Marooned in Realtime comes to mind. The survivor's tech is close to what the singularity level was, but they "missed" the singularity and aren't improving their tech over the timescale of the story because of low population and other priorities.
"Missing" an intelligence explosion would be hard, if it drastically optimizes the solar system. In Vinge, this works because the exploding society simply disappears - implying that they're off in higher dimensions or femtotech or something. Other examples would be if the survivors are being simulated - there's a great story whose author I forget, about someone waking up after the singularity because he was an early brain scan and they just fixed him up now.
The whole point of the concept of singularity is that we don't know what will happen afterwards.
Some things, however, are less plausible than others.
In fiction, you have to make it up, but you can't make it something implausible.
But any real scenario will seem implausible. That's what the idea of the singularity is about. If you believe you can predict in any sense what the world will look like afterwards, "singularity" is a very poor term to use.
I think it is a poor term.
Still, it can only mean 'a whole lot less predictable than usual', not 'MAX ENTROPY ALL THE TIME'. Physics will still apply. Given that people survived and have lives worth writing stories about, we are at least within 'critical failure' distance of friendliness in AI. That narrows things very considerably.
A lot of the unpredictability of the singularity arises from a lack of proof of friendliness. Once you've cleared that (or nearly), the range of possibilities isn't singular in nature.
If there's nothing I can write that wouldn't break your Willing Suspension of Disbelief about events after an intelligence explosion, then there's nothing I can write to do that, and nothing you can suggest to add to my story's background; and both our times might be spent more productively (by our own standards) if we focus on our respective projects.
With high probability we do, unfortunately.
With high probability there won't be any humans afterwards, but that doesn't tell you what the world would look like.
Disagree, since over 99% of what I care about would be the same across all post-singularity states that lack lifeforms I care about. Analogously, if I knew that tomorrow I would be killed and have some randomly selected number written on my chest I would believe that today I knew everything important about my personal future.
If you want to tell a story about that world, then you need to know something about what the world looks like besides "there are no humans".
We also don't know what will have happened by 200 years from now (singularity or no singularity), but that is no obstacle to writing science fiction set 200 years in the future.
Steven Poole criticises doubters of human rationality by lauding the virtues of "public reason", which supposedly ensures that "any one thinker can be corrected". It is true that collaborative and, indeed, disputatious reasoning is vital - and the "nudge" theorists he snipes at have never impressed me - but the idea that our societies are efficient self-correcting organisms is plain false. Some influential people think that climate change is a dire threat, for example, and others that it is a mere sham. Some think that state redistribution is key to a functioning society and others that the state is an abomination. Some influential people think that Gods exist and others that there is nothing beyond the material of life. To make sense of the world, intelligent people have to use their own powers of reasoning, and should be aware of their limitations.
I've just enrolled in a 1 year applied mathematics Master's program. The program is easy, and I'm mostly doing it because it costs me nothing and a Master's degree is a good asset to have. I plan on working full time and not attending any classes, and I'm certain I still won't have any problems there.
However, coming from a CE background, I have no idea what to do for my thesis. I want it to be something from the fields of AI or Probability/Statistics, but I'm out of ideas. So, any suggestions as to what may be either fun or useful (preferably both) in those areas, that I should dedicate my spare time to?
If you want a solid year-long project, find a statistical model you like and figure out how to do inference in it with variational Bayes. If this has been done, change finite parts of the model into infinite ones until you reach novelty or the model is no longer recognizable/tractable. At that point, either try a new model or instead try to make the VB inference online or parallelizable. Maybe target a NIPS-style paper and a ~30-page technical report in addition to whatever your thesis will look like.
And attend a machine learning class, if offered. There's a lot of lore in that field and you'll miss out if you do the read-the-book-work-each-problem thing that is alleged to work in math.
I did some machine learning in previous studies, and read up on some online, so I have a basis in that. Taking Advanced Statistics, and AI (maths part) courses, and a few less relevant ones.
I plan on doing it in two years, one for the courses, one for the thesis, so a yearlong project is acceptable. However, I'll also have a full time job, and a hobby or two, and a relationship. The suggestions sound great, and I'll dedicate a few days to study them carefully. Thank you very much.
The Less Wrong Wiki is a valuable resource. Since Eliezer Yudkowsky's original sequences were completed, the Less Wrong community has changed much. This thread is for voicing updates Less Wrong users would like made to the Wiki, especially so it's easier to use it as a reference when introducing a concept from Less Wrong for the first time.
If you want to start a thread for that task, a discussion post probably makes more sense than a post in the open thread, which isn't as long-lived.
Who gets what value from the wiki?
For a time, I adopted the Special Threads page, because it looked like a valuable resource. Eventually it grew to be too much of a hassle, and after that code changes made it less useful (because now there are links to the latest open thread and rationality diary and quotes thread and so on). There have been a lot of attempts to build up various parts of the wiki as useful, but I don't think all that much of it is actively maintained, and the parts of it I use the most seem to be metaposts (sequence lists, etc.).
Several things come to mind here.
A while ago Louie Helm recommended buying Darkcoins. After he did, the price of a darkcoin went up to more than $10, but now it's down to $2. Is it still a good idea to buy darkcoins, that is, is their price likely to go back up?
Honestly I doubt cryptocurrencies are any better than a random walk (unless you have some special foreknowledge of some extra attention the currency is about to get.)
If you have a 50% chance of tripling your money and a 50% chance of losing everything, that's still a very good bet. It's difficult to say whether Darkcoins have that property.
Have you noticed people adopting the Less Wrong terminology and inaccurately priding themselves on being "sane"?
I haven't noticed anyone doing that, but it does sound like the sort of thing people would do. This can hardly be avoided, though, unless we were to entirely decouple status from all superficial characteristics.
Have you? If you have, would this tell us more about LW or human psychology? If the latter, then what's the news?
Sanity isn't binary and YMMV.
I'm trying to make Christmas travel arrangements to London, along with a family member who's somewhat spooked by the Ebola thing. I'm ~80% sure that the risk is negligible. I base this mostly on the prior that it's the current media panic and current media panics can usually be ignored, plus a cursory look at the number and location of cases (in particular, nothing in the U.K. yet, although apparently there's some expert noise worrying about the possibility).
Short answer: Yes.
Let's go through a Fermi estimate. According to Wikipedia there have been only a small handful of Ebola cases in the First World during this outbreak, almost exclusively among people who'd been volunteering in West Africa on missionary or health care assignments. (There has, however, been one case of local transmission in Texas.) Let's be generous and say 20 people with the disease flew in or out of the US over the last month. Now, there are about two million air passengers per day in the US, of which I'm guessing about a quarter are on international flights; that works out to 15 million international passengers over the same month.
Ebola takes close contact to be transmitted; it's not an airborne disease. Since you'll probably be sitting next to each other, that means you'll each only be coming into close contact with one other person on each flight. Let's say that, if they're infectious, that person has a 20% chance of giving you Ebola over the course of the flight (probably an overestimate), and that a case of Ebola in the First World has a 50% chance of killing you. Combined with the ratio of infected travelers we worked out earlier, that means that your chance of dying from Ebola contracted on each leg of the trip is about one in 7.5 million.
That's about as dangerous as driving 30 miles in a car, or a bit less than a mile on a motorbike. And I'm making a number of simplifications that almost certainly inflate the risk: your route, for example, doesn't go anywhere near West Africa, assuming you're starting near Atlanta as your profile would suggest.
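The Fermi estimate above can be sketched as a quick script; every number in it is one of the rough assumptions already stated, not measured data:

```python
# Rough Fermi estimate of the per-flight risk of dying from Ebola
# contracted from a seatmate, using the assumed numbers from above.
infected_travelers = 20          # generous guess: infected people flying in/out per month
monthly_intl_passengers = 15e6   # ~2M passengers/day * ~1/4 international * 30 days
p_seatmate_infected = infected_travelers / monthly_intl_passengers

p_transmission = 0.20            # chance an infectious seatmate infects you (likely too high)
p_death = 0.50                   # assumed fatality rate with First World care

p_death_per_leg = p_seatmate_infected * p_transmission * p_death
print(f"~1 in {round(1 / p_death_per_leg):,}")  # about 1 in 7.5 million
```

Changing any single input by a factor of two only moves the answer by the same factor, which is the point of doing it this way: even pessimistic inputs leave the risk in "rounding error of a road trip" territory.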
What is it that your family member sees as dangerously increasing the risk of getting Ebola?
I am far from being an expert and you shouldn't trust me, but my handwavy judgement is the same as yours.
Possibly relevant Sequence article about the availability heuristic.
What's the best way to get (U.S.) legal advice on a weird, novel issue (one that would require research and cleverness to address well)? Paid or unpaid, in person or remotely.
(For that matter, if anyone happens to be interested in donating good legal advice to a weird, novel non-profit organization, feel free to contact me at histocrat at gmail dot com).
It probably includes finding a person with expertise on the subject matter. That means it's easier if you reduce the level of abstractness and specify the issue at least a bit.
I'm happy to specify completely, actually, I just figured a general question would lead to answers that are more useful to the community.
In my case, I'm helping to set up an organization to divert money away from major party U.S. campaign funds and to efficient charities. The idea is that if I donate $100 to the Democratic Party, and you donate $200 to the Republican party (or to their nominees for President, say), the net marginal effect on the election is very similar to if you'd donated $100 and I've donated nothing; $100 from each of us is being canceled out. So we're going to make a site where people can donate to either of two opposing causes, we'll hold it in escrow for a little, and then at a preset time the money that would be canceling out goes to a GiveWell charity instead. So if we get $5000 in donations for the Democrats and $2000 for Republicans, the Democrats get $3000 and the neutral charity gets $4000. From an individual donor's point of view, each dollar you donate will either become a dollar for your side, or take away a dollar from the opposing side.
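As a minimal sketch, the settlement rule described above can be written in a few lines (the function name `settle` is my own, purely illustrative):

```python
def settle(side_a_total, side_b_total):
    """Return (payout_a, payout_b, payout_charity).

    The overlapping amount raised by both sides cancels out and is
    diverted to the neutral charity; the surplus goes to whichever
    side raised more.
    """
    offset = min(side_a_total, side_b_total)
    return side_a_total - offset, side_b_total - offset, 2 * offset

# Worked example from above: $5000 for Democrats, $2000 for Republicans.
print(settle(5000, 2000))  # -> (3000, 0, 4000)
```

Note that the payouts always sum to the total escrowed, so the scheme never holds back money; it only redirects the canceled portion.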
This obviously steps into a lot of election law, so that's probably the expertise I'll be looking for. We also need to figure out what type of organization(s) we need to be: it seems ideal to incorporate as a 501c(3) just so that people can make tax-deductible donations to us (whether donations made through us that end up going to charity can be tax-deductible is another issue). I think the spirit of the regulations should permit that, but I am not a lawyer and I've heard conflicting opinions on whether the letter of the law does.
And those issues aside, I feel like there could be more legal gotchas that I'm not anticipating to do with Handling Other People's Money.
I think you might be underestimating the amount of money in politics that comes from large organized contributors who give money to both parties for purposes of making the system in general beholden to them rather than favoring one ideology over the other.
While some money does follow that road, not all money does.
I think those contributors will probably not be our main demographic, since they have an interest in the system as it is and don't want to risk disrupting it. In theory, though, donating to both parties can be modeled as a costly signal (the implied threat is that if you displease me, the next election I'll only donate to your opponent), and there's no reason you can't do that through our site.
You should probably chat with Sai, of Make Your Laws. (http://s.ai/) He's spent a bunch of time recently petitioning the FEC to answer questions about various crazy ways his organization would like to funnel donations. (Specific technical questions, like: "If someone gives us a donation whose recipient is conditional on a condition that won't be known until 6 months from now, [question about how some regulation applies].") I bet he can at least help you find answers.
Sai once gave a talk advertising LessWrong at the Chaos Computer Congress (CCC) in Berlin.
On the flight to Berlin he just boarded the plane with fruit juice. He got it on board by declaring it a medical substance he needs to keep his glucose level up. He said he knew the TSA rules better than the TSA folks. Then he asked how he could listen to the cockpit radio and got kicked off.
He took the next plane and allegedly carried, undetected, enough pure caffeine to blow up the plane, and allegedly told the crowd at the CCC about it, which was probably a dumb move.
While he's no lawyer by trade, I think he knows very well how to navigate the rules and is likely supportive of creative projects like this.
Is caffeine explosive or did you mean to type some other word instead?
It's been four years so my memory might be faulty, but if I remember right it was caffeine. The normal dosage of caffeine that Wikipedia lists is 500 milligrams.
For obvious reasons the part where he allegedly spoke about it isn't in the video. He allegedly told the audience that he would demonstrate the explosive capabilities of the substance later that day. Unfortunately there was an official of some agency in the audience who didn't find this funny and who then walked around with recording equipment to record every further word Sai said, so Sai didn't go into further details.
It's illegal to carry something on a plane that can blow up the plane, but TSA rules obviously can't check for every possible explosive substance. The idea that fluids are the only explosive things is obviously also mistaken. Disclosing security vulnerabilities is very much in the spirit of the Chaos Computer Congress. So it was the substance he chose when they illegally forbade his fruit juice (and a bit later he sued them for not allowing the fruit juice he carries with him for health reasons).
For me Sai was a very impressive character.
Thanks, I'll look him up.
I recommend crossposting the request for information to http://www.effective-altruism.com/ . Maybe someone knows someone who can help. It's worthwhile to spread the request so that many people see it.
After thinking about the issue a bit, an edge case that's worth thinking about: what happens when someone donates X amount of money to a party personally, then donates Y via your process, and X+Y together exceed the maximum allowable donation?
It seems to be implicit in your model that funding for political parties is a negative-sum arms race. This is starkly at odds with much of political thinking, which sees funding for political parties as a positive-sum game. This is expressed by public subsidies for political parties, in such terms as public funding/matching funding/tax deductibility of political donations, depending on where you reside.
Political parties turn funding into votes by getting their message out to voters, so the more funding political parties have, the better informed an electorate we will have. Moreover, to the extent that funding getting your message out becomes less binding of a constraint, then other constraints (such as the persuasiveness of that message) will become more binding - which seems like a good thing.
I guess it just goes to show that one person's public good is another person's public nuisance. In my own view, the most damaging negative-sum arms race is academia. Perhaps you will inspire me to set up my own 501c(3) to allow matching donations to universities to be diverted to political parties.
Consider the incentives for people who express this "political thinking".
More political agitprop does not necessarily lead to more informed voters. Is there any real-world data on the relationship between political campaign spending and voter knowledge (once you pass the very low bar of "oh, there is an election and X, Y, and Z are on the ballot")?
P.S. Analogous reasoning would argue for public funding of advertising as leading to "more informed" consumers who could make better choices about what to buy X-D
Well, for starters it helps to also have some information about who X, Y, and Z are.
I definitely agree with the line of argument that advertising is a public good, because it leads to more informed consumers, and I am highly sceptical of knee-jerk claims that it is a negative-sum arms race. So at least we're both consistent!
However, I don't think that advertising (whether commercial or political) should be subsidised, because I think the government is very bad at encouraging public goods. My point was merely that HonoreDB's charity, although no doubt well intentioned, appears to me to be destroying value, rather than creating it...
Consider your own analogy to commercial advertising. Suppose Coke and Pepsi signed a compact to reduce their advertising expenditures by a specified amount; would you suppose that to be good or bad for the consumer?
I don't think that agitprop and/or advertising leads to more informed voters/consumers because its purpose is not to inform. Its purpose is to manipulate, to force the subject to a certain opinion by all means necessary. Any "informing" that happens is entirely coincidental and, depending on the circumstances, could be considered a feature or a bug.
In local terminology, advertising tries to change the map in your head and the main feature of the one it wants to install is that it shows all paths leading to the same place, the one conclusion that it wants you to make. An accurate map is bad from the advertising point of view and needs to be replaced. In the service of this goal the advertisers can and do use biases and fallacies, they spin, mislead, and obfuscate, and on occasion just lie.
Economically -- good. Psychologically -- I don't know. People like to be told what to prefer :-/
Wow. Let's just say we're very far apart on this.
There's a wealth of law and economics literature about the effect of advertising, which demonstrates that advertising bans hurt consumers and help producers - see for example this classic. An agreement within a cartel isn't the same as a legal ban, but we should surely expect it to have a similar effect - especially given that many real-world advertising bans were lobbied for by major incumbents. Do you have any rationale for why you think consumers would actually benefit?
I was inviting you to consider what I considered an obvious cartel behaviour aimed at suppressing consumer ability to get the best deal. But bravo on biting the bullet!
First, we were not talking about legal bans (which I am generally not in favor of).
Second, you have to be quite careful here not to confuse "advertising" and "intensity of competition". I have no doubts that reducing the competition hurts consumers, but I am not convinced that reducing advertising expenditures necessarily leads to reduced competition. I suspect that these two things are often conflated (and the causation flipped).
In this particular case, do you think that if both Coke and Pepsi reduce their advertising budgets by, say, $10m each, the consumer will be hurt economically? What is the mechanism for that?
Third, are you implicitly claiming that the current level of advertising expenses is optimal? If we accept your thesis and start to increase advertising, will there be some point when the curve bends -- the advertising becomes excessive? Presumably so. Where are we with respect to this point? How do you know?
Plain-vanilla cost savings, some of which will be passed on to consumers.
Huh? I walk into a supermarket and look at the prices of Coke and Pepsi which are there side by side. I know from experience to which degree I prefer one over another. How will advertising help me get the best deal?
Glad to hear it. Do you agree with the wealth of literature showing that bans on advertising are bad for the consumer? And do you agree that a binding agreement within a duopoly would have a similar effect to a legal ban?
Yes, I think the consumer would be hurt. Advertising alerts us to new products, changes to existing products, and changes in the terms (eg price) under which those products are sold. Let me give you two examples of Coke/Pepsi advertising and how it affects me.
Where I live, Coke produces a wide variety of products, and is constantly adding more. Currently, they are heavily advertising their new "Coke Life" product, which has a different kind of sweetener, and a slightly different taste. If Coke had a smaller advertising budget, fewer consumers would be aware of this new product and what it's about, resulting in loss of the potential consumer surplus from drinking the new product among those who prefer it to other Coke or Pepsi products.
In addition, Coke frequently has promotional offers on. Just walking into the supermarket and looking at the prices is inadequate; I specifically go there to buy Coke because of the promotional offer. Otherwise I might miss out. And I know about the promotional offer because of advertising. In the absence of this, consumers would have to go to the supermarket much more frequently, just to check the price of Coke. This would be a loss.
I am claiming that, given that the current level of Coke vs Pepsi advertising is the result of adversarial competition in a free market, I think there's a very heavy burden on people who claim it's "too high" (or "too low"). I am not claiming that it's "optimal" by everyone's idiosyncratic criteria.
Why on earth would the cost savings be passed on to consumers? Do you think Coke or Pepsi is sold at marginal cost? This is a market with unique products and partial substitution, so these companies are price-setters, not price-takers. This saving would just increase their profits.
It probably is on the margin. I'd guess that, while if both parties received 99% less donations there might be some kind of adverse effect, if both parties received epsilon less donations the effect would be of order epsilon squared or smaller.
That the ratios of the marginal benefits of a dollar for the two parties are 1:1 is not at all obvious.
Sure, but it's really hard to anticipate which side will benefit more, so in expected value they're equal. I'm sure some people will think their side will be more effective in how it spends money...I'll try to persuade them to take the outside view.
I wonder why wanting or having something in the "wrong" century allegedly makes you a morally bad person now; but when the thing you want arrives, works and enough people have or use it to make it socially normalized, they accept it as part of their current standard of living and don't go around disapproving of each other for possessing it.
For example, I've noticed a ramping up lately of propaganda against those horrible people called "billionaires." I would call today's billionaires the early adopters of future living standards, assuming that we continue to have exponential economic growth. In a few centuries, people with the equivalent purchasing power of today's billionaires would probably consider themselves "middle class." We find this idea in science fiction, for example, where "middle class" people in imaginary future societies own vast estates, have staffs of robots to do their bidding, fly private space ships and so forth. The really wealthy people, by contrast, can buy entire planets as their personal property. A. Bertram Chandler provides an example of this idea in his science fiction novel, To Prime the Pump, though no doubt you can think of other examples.
I've noticed something similar regarding the public's perception of cryonicists. They call us selfish and narcissistic, for example. (People even wrote this about Kim Suozzi.) We just want a standard of health care that doesn't exist in our century, but we might have the ability to reach it via cryostasis to, say, the 24th Century. How does that make us selfish or narcissistic, instead of, say, "visionary"? If the people in the 24th Century have rejuvenation, revival from cryo- and other forms of bio-stasis, radical life extension and so forth, and enough of them take advantage of these technologies to normalize them socially, then they won't go around calling each other selfish and narcissistic for benefiting from what they consider the current standard of medicine.
In other words, people who resent today's billionaires and cryonicists haven't thought deeply on the timing issue. Though that makes sense if they expect to clock out any time now, and they don't think they can do anything about it.
I don't think this is a good example of the broader phenomenon you are describing. When people criticize the very wealthy, they're primarily making a criticism about relative, not absolute standards of living. I.e. "It is a sin to have so much when others have so little." I wouldn't say this is the only criticism, because I have seen, for example, criticisms of people owning mansions when they have small families (since it creates enormous upkeep costs and the unused rooms have basically no value except as a positional good). But that's the exception; I don't think anyone would consider owning a Maserati immoral (at least on grounds of wealth rather than environmentalism) if there weren't also people struggling to pay for basic necessities.
My impression is that what billionaires are resented and propagandized against for is mostly not lifestyle advantages like having huge houses, private jets, the option of not working for a living, etc., but two other things.
What these have in common is that the anger is aimed not at the lifestyle of the super-rich -- at all the nice things they are able to have -- but at its (alleged) negative impact on others.
If indeed the economy continues to grow exponentially for a few centuries, and if scientific and engineering progress continue strongly, then indeed everyone may end up with a lot of the lifestyle elements that only the very rich have now. But it won't be possible (by definition) for everyone to have far-more-than-average influence on public policy, or for everyone to get wealthy by making people in general poor. And those, not the luxurious lifestyle, are what billionaires get attacked for.
Now, of course it's possible that what really motivates any given criticism of very rich people is good old-fashioned envy (or, in the case of the financial elites, sometimes good old-fashioned anti-Semitism) and that all the stuff about buying influence and causing economic disaster is just an excuse. But it seems at least worth considering the possibility that the people making those criticisms mean what they say.
I think it's pretty safe to dismiss "narcissistic" on the grounds you do. But a case can be made for "selfish" -- not really any better a case than can be made for pretty much everyone, but I'd say that pretty much everyone is in fact selfish :-). Specifically, suppose you choose for $100k of your money to be spent on cryopreservation at the moment of (what non-cryonicists would consider to be) your death. That's money that would otherwise have been inherited by others. So you're choosing to use that money to give yourself a (probably small) chance of surviving into the far future, rather than to benefit those others.
Most of us spend most of our money to benefit ourselves (or in some cases our close family) rather than others. Most of us could do a lot more good to the world by spending a lot less on ourselves. So I think the same sort of charge of selfishness can be levelled at almost everyone as is levelled at cryonicists. But that doesn't make it wrong.
You can, indeed, characterize what cryonicists or billionaires are doing as wanting something that everyone will have in the future. But the complaint (or at least a complaint) isn't that no one should ever have it, but that for one reason or another trying to have it now imposes a substantial cost on other people.
(Note 1. The following -- highly fanciful -- analogy may help. Suppose that instead of trying to live a 24th-century lifestyle in the 21st century, you're trying to live an affluent US / Western European lifestyle in some desperately poor part of Africa. To this end you build factories and roads and things -- but only for your use, the locals get no benefit. The factories are staffed with imported minions and supply only you, no one else is allowed on your roads, etc. It doesn't seem unreasonable for your neighbours to resent you, not simply for living like an affluent Westerner, but for imposing noise and pollution and nuisance on them in order to do so, with no compensating benefit to them.)
(Note 2. The above analysis is not concerned with the question of whether the criticisms are correct. One could argue, e.g., that the billionaires' influence on public policy is actually beneficial, that every cryonics customer benefits everyone by encouraging funding for research in relevant areas, etc. I'm concerned only with what the criticisms actually are.)
That's an interesting idea, but I'm not sure it works well historically. At what point did the lifestyle of hoi polloi start to resemble that of a Roman senator? Does today's middle class live like medieval feudal lords?
In the sci-fi from the 50s. Contemporary sci-fi is rather more dystopian (with some exceptions, notably Iain Banks's Culture).
The '60s? Widespread literacy (and cheap paperbacks), indoor plumbing, private gardens, the dinner party or similar gathering as a social occasion.
In some interesting ways. There are similarities in diet, and to a certain extent occupation. Of course there are also large differences.
That's one hell of an assumption.
Disagree. I think the data support exponential global economic growth as a reasonable assumption.
http://data.worldbank.org/indicator/NY.GDP.MKTP.KD.ZG
In fact, given the historical evidence, I think a very compelling argument is required in order to NOT assume exponential global economic growth.
I think calling "today's billionaires the early adopters of future living standards" does incorporate two debatable assumptions, but they're implicit assumptions rather than the explicit one CellBioGuy called out. The implicit assumptions are that (1) future economic growth translates into real income growth throughout the income distribution, and (2) being rich in the future will actually allow one to mimic the lifestyle of today's billionaires.
(1) could be false if economic growth just ends up being captured by e.g. the top 10% of the income distribution. If so, decades from now most people's living standards might remain below those of today's billionaires.
(2) could be false if components of current billionaires' lifestyles become difficult to buy in the future. For instance, a billionaire today can buy pretty well any dwelling they like. But even in the future I doubt most people at the 20th percentile of the income distribution could do the same.
Having slaves two centuries ago was accepted, but having them now allegedly makes you a morally bad person. I don't think that's what you were referring to, though.
Are you talking about the Bill Gates-rich or the Paris Hilton-rich? They are both admired and hated for totally different reasons.
I posted this last week but was too late to get any responses, so I'm reposting:
I want to change which charities I donate to, and am looking for transparent, accountable, secular (or at least non-evangelical) Canadian charities that promote democracy, social reform, infrastructure building, rationality, humanism, education, scientific progress, similar principles. Any suggestions for charities worth investigating, or at least a group/organization/website that can help me find what I'm looking for? In the past I haven't properly researched this sort of thing so I'm short on resources.
So far I've taken a closer look at Oxfam and Doctors Without Borders.
(I might remind one of you that disagreement isn't grounds for downvoting all my posts)
Amnesty, UNICEF, and the Bill and Melinda Gates Foundation, as far as mainstream charities go. I believe they all have specific Canadian divisions if you are worried about tax reasons.
Some others you might check out are Canadian Centre for Policy Alternatives, Canada Without Poverty, Equiterre, Canadian Council For International Cooperation, Tides Canada, CoDevelopment. I had a longer list but misplaced it.
I also strongly suggest you research each charity on your own instead of depending on whether or not a ranking website tells you it is good.
Givewell already analyzes the effectiveness of various charities, so I'd start there.
I don't agree with their emphasis on direct cash transfers. It reminds me of the Canada Revenue Agency's statement that "preventing poverty is not charity, only relieving it." Givewell has always struck me as being more concerned with balancing one's karma than with actually causing lasting improvement anywhere. That's just my perception, though.
It's really hard to measure lasting improvements, which does bias the choice of interventions Givewell considers, but they endorse direct transfers because they have been shown to be more effective at producing lasting improvements than the other things they've considered.
Lasting improvements for whom? Measured how?
Not addressing the underlying issues means there is no stem to the "supply" (so to speak) of people requiring charitable help.
While Givewell does recommend one charity that focuses on direct cash transfers, not every charity it recommends does. Givewell also analyzes charities in detail, so even if you don't agree with their conclusions, reading their analysis of a particular charity can help you evaluate it.
There are domains where it's easy to perform experiments (physics, chemistry) and others where it's infeasible (biology, economics) or impossible (psychology).
The quality of scientific understanding in these different domains is necessarily different. Has there been any thought or study devoted to doing statistics or Bayesian learning where you suffer from lack of feedback or hysteresis? Is there a mathematics for doing science in low-feedback domains?
The whole point of Bayesianism is that you get as much information as possible from a small amount of data. It works perfectly well in noisy domains.
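To make that concrete, here is a minimal sketch (mine, not from the thread) of Bayesian updating from very sparse data: estimating a coin's bias from just three flips, using a simple grid approximation over a uniform prior. Even with so little data, the posterior is a perfectly well-defined object.

```python
# Bayesian updating on sparse, noisy data: grid approximation for a
# Bernoulli bias parameter theta, starting from a uniform prior.

def posterior(observations, grid_size=101):
    """Return (theta values, normalized posterior probabilities)."""
    thetas = [i / (grid_size - 1) for i in range(grid_size)]
    weights = []
    for t in thetas:
        # Likelihood of the data: product of theta or (1 - theta) per flip.
        w = 1.0
        for obs in observations:
            w *= t if obs else (1 - t)
        weights.append(w)
    total = sum(weights)
    return thetas, [w / total for w in weights]

thetas, probs = posterior([1, 1, 0])  # two heads, one tail
mean = sum(t * p for t, p in zip(thetas, probs))  # posterior mean, ~0.6
```

The posterior mean lands near 0.6 (the Beta(3, 2) mean), matching the intuition that two heads out of three flips shifts belief modestly rather than decisively.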
The recent post on Knightian Uncertainty may or may not be relevant to your interests - it's not the same thing but it seems like it might be related to what you were really getting at.
In what universe can't you do experiments about biology or psychology?
Surely you can do some tests or experiments, but the staple of physics-style experiments, identically prepared systems, is partially or totally lacking.
It's only unfortunate that the poorly qualified first paragraph has totally hidden the real question, which was at the end of the second paragraph. I guess I'll have to wait for the next open thread...
I just realized that MrMind probably meant evolutionary biology and evolutionary psychology, which makes waayy more sense.
Guess I should've stayed home today.
Well you can't do things that would be really nice to do in biology like "rerun the tape of life", not to mention the tremendously interdependent system that is any living organism. And the artificial laboratory conditions of psychological experiments, along with variability of subjects, form a huge impediment to study.
It's not that you can't do experiments, but it's much more difficult to isolate parts of biological, economic and psychological systems and experiment usefully on them.
You can't do things in cosmology like "rerun the tape of the universe" either.
Sure, but I think it's reasonable to say that humans are ill-behaved as experimental subjects compared to other biological organisms, which are ill-behaved compared to particles, stars and galaxies.
I mean ill-behaved in the sense that their behaviors cannot be reliably modeled by compact mathematical models.
Can someone explain/articulate why rent-seeking behavior is bad for an economy? Why should an economy be structured to use people's natural greed to motivate them to create wealth?
In particular, I'm thinking about domain sharking, i.e. people who buy domain names in order to sell them rather than use them to develop a site. When the sale happens, money is just moved from the buyer to the seller with no wealth being created. In fact, the net effect is negative because it deters people from starting websites.
This article talks about rent-seeking. However, after reading it a few times, I still feel like there are some key conceptual gaps in my understanding that I'm hoping will be filled if I hear a different explanation.
The point of the economy is to produce value (where "value" is defined as "something people want"). Accordingly you want to structure the economy in such a way as to incentivize people to produce value. Rent-seeking produces no value and, in fact, lessens the incentives for others to produce value.
Nothing else works as well.
Thanks, I really appreciate the explanations! I'm a bit confused about a few things though. I have a vague understanding, but don't truly understand them.
1) How exactly does, say, operating a restaurant produce value?
2) Why doesn't rent-seeking produce value?
Let's say you are running a restaurant with, say, 200 customers per day on average. This means that 200 people have voluntarily decided that the food and service your restaurant provides are worth more than the amount of dollars you have charged them. Your restaurant has produced value that is greater than your revenue. If your revenue is covering your costs (which are the value you use up in the process of running your restaurant), then you are generating positive value.
Rent-seeking is basically about redistribution of value between different people (or organizations). To take a very simple example, imagine a medieval peasant clearing a new field in the forest and growing crops there. He produces a certain amount of value. After a few years the local feudal lord learns of the new field and tells the peasant that the field is on the lord's land and so the peasant shall pay a quarter of his crop to the lord. The amount of value produced is still exactly the same, but before 100% of it went to the peasant and now 75% goes to the peasant and 25% goes to the rent-collecting lord.
Hm, ok. I get the idea that voluntary exchange implies that each party thinks they're receiving something of equal or greater value in return.
I'm still a bit confused about when you can say that value is being redistributed. For example, what the lord did contributes to the precedent that land ownership is enforced, which probably has some value. How do you know when no value is being created? Are there any well accepted examples?
What do you think of the domain sharking example? I sense that it involves value destruction (rather than creation). Instead of an entrepreneur being able to buy a domain name, he has to pay a domain shark more money for it. Thus transferring money from the entrepreneur to the domain shark, discouraging entrepreneurship and wasting the domain shark's labor.
Well, "value" is a very baseline concept. You can create value out of thin air -- e.g. by singing (you can create negative value the same way :-D). For economists this concept of value is too encompassing -- in most cases they prefer to talk about goods and services, or better yet, tradeable goods and services. With these, things are easier: you can gauge their value when they trade and measure it in money. There are certain pitfalls here, of course (ask an environmentalist :-/), but for most basic economics you can assume that the value of something is how many dollars/euros/gold coins/sheep you can sell it for.
So with our peasant, let's say he was able to grow 10 bushels of rye on his forest field and their value was 100 thalers -- so the "revenue" from this economic activity was 100 thalers and all of it went to the peasant. Afterwards the "revenue" was still 100 thalers, but 75 of them went to the peasant and 25 -- to the lord. Note how the total sum didn't change -- this means we're talking about pure redistribution here.
Value destruction is also possible (you can usually treat it just as negative value). Let's say that the cost of maintaining and working this field is 80 thalers. While the peasant was working it for himself, his "profit" was 20 thalers and that made him happy. Now, after the lord showed up, he gets only 75 thalers but spends 80 -- he's in the red! So he abandons the field and the forest swallows it back. Our lord just destroyed some value with his rent-seeking.
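The peasant/lord arithmetic above can be written out as a tiny sketch (numbers taken straight from the example):

```python
# Rent-seeking arithmetic from the peasant/lord example.
revenue = 100        # thalers of rye the field yields each year
cost = 80            # thalers to maintain and work the field
lord_share = 0.25    # fraction of the crop the lord demands as rent

profit_before = revenue - cost                    # peasant keeps 20 thalers
profit_after = revenue * (1 - lord_share) - cost  # peasant is 5 thalers in the red
```

The total value produced never changes; the rent only redistributes it, and once the peasant's share drops below his cost, he abandons the field and the value disappears entirely.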
You ask who benefits not from the trade, but from the agent's existence. Let's reason counterfactually: what would the market look like if the restaurant were removed from the equation?
If the restaurant is profitable and popular, then customers will feel sad over its absence.
If the restaurant is unprofitable and unpopular, then removing the restaurant won't make much difference (and free up resources to be allocated elsewhere).
If the restaurant redistributes storebought merchandise where it otherwise wouldn't be sold, then customers will feel sad over its absence.
If the restaurant simply clears out everything from the Walmart next door, repackages it, and jacks the price, then customers will feel glad at its removal. Good riddance!
Domain Sharking is an example of Case 4. If Domain Sharks ceased to exist, their customers would buy the domain names themselves at lower prices and feel glad. Domain Sharks can exist because each URL is unique by design. So if you want a specific URL, you must go through the Domain Shark. I.e., it's a monopoly devoid of any positive externalities.
Contrast Domain Sharks with Stock Brokers (for lack of a better example). Stock Brokers give you personal advice for a fee. But you also have the choice of forgoing the advice for a smaller fee by trading online. Brokers in this case offer a useful service by buying stocks in bulk and dispensing advice. And if traders don't like that, an alternative exists.
P.S. After reflection, maybe this is where you're getting confused. You understand that value is created when you make stuff people want. But then you look at Domain Sharks and think "People are buying from Domain Sharks. Domain Sharks have stuff people want. Therefore, Domain Sharks must be valuable." The issue is that the Domain Shark is a barrier to what people want rather than a portal, yet somehow profits from what looks like (if you squint) twiddling their thumbs.
P.P.S. I believe the canonical example is Digital Rights Management. Why go through record labels when artists can upload?
Thanks! The idea to reason counter-factually is very helpful.
1) So is real-estate rent-seeking? How is it different from domain sharking?
2) What allows someone to extract rent? The only things I could think of are a) governmental power, b) ownership rights, and maybe c) physical force.
3) Are there degrees of rent-seeking? Like what if a domain shark is using the domain to host a site that has tetris, and a few of his friends play it? Is that still technically rent-seeking? What if 100 people played it? 1,000? 1,000,000? Is rent-seeking relative to what it could have been (like would it be rent-seeking if the domain could have been used by google or something?)?
This is my understanding of rent-seeking. If you wouldn't mind, I'd appreciate any comments/elaborations. No worries at all if you don't want to.
Ever wondered where the 'rent' in rent-seeking came from? Yup, land ownership is pretty much the TropeNamer of rent seeking. Of course, land is quite special in that land-scarcity (and thus, land rent) is unavoidable, even under the most relaxed zoning rules. Still, the basic feature distinguishing unearned rent from earned returns is there: land rent goes to a private landowner, not to the surrounding society and general public that actually gave land its high value.
Rent-seeking doesn't produce value, by definition. This means that people will argue over what actually counts as rent-seeking. I am not going to define rent-seeking, but a similar activity is fighting over the division of a fixed pie. Dividing the pie is a zero-sum game. Effort expended toward changing the outcome does not grow the pie. If that effort could have been spent on something productive, it should be thought of as destroying value.
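The "fighting over a fixed pie" point can be illustrated with a standard rent-seeking contest model (a sketch of my own, in the spirit of Tullock's contest; the function and its parameters are illustrative, not from the thread). Each party spends effort to win a share of the pie, but that effort is pure waste from society's point of view:

```python
# Contest over a fixed pie: effort buys a larger share but shrinks
# total surplus, since the pie itself never grows.

def contest(pie, effort1, effort2):
    """Return (payoff1, payoff2, social_surplus) for a simple contest."""
    total_effort = effort1 + effort2
    share1 = effort1 / total_effort if total_effort > 0 else 0.5
    payoff1 = share1 * pie - effort1
    payoff2 = (1 - share1) * pie - effort2
    # Social surplus is the pie minus all the effort burned fighting over it.
    return payoff1, payoff2, payoff1 + payoff2

p1, p2, surplus = contest(100, 25, 25)  # each side burns 25 fighting
```

With equal efforts of 25, each side ends up with 25 and society keeps only 50 of the original 100: the other 50 was spent on the fight itself, which is exactly the sense in which rent-seeking destroys value.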
An economy isn't structured with a purpose in mind. Economies self-organize: they structure themselves, and people try to provide services at prices that customers want to pay.
This means that anybody in the system who sees an inefficiency can make money by eliminating that inefficiency. In hierarchical organisations usually only people at the top can redirect resources and the people at the top often don't have all the information.
In our system we have a plurality. We solve some problems via hierarchical organisations and others via markets. In the case of big corporations that have their own hierarchies, there's a mix.
Interesting points about hierarchical organizations.
I'm confused about why you say that "An economy isn't structured with a purpose in mind. Economies self-organize." Don't private ownership and other features of capitalism make this self-organization possible?
Looking at the features of a real-world system and analysing those features is different from something being structured for a purpose.