I see in the "Recent on Rationality Blogs" panel an article entitled "Why EA is new and obvious". I'll take that as a prompt to list my three philosophical complaints about EA:
The best strategy to produce ethical behavior is simply to appeal to self-interest
This is only true of ethical behaviours that can be produced by appealing to self-interest. That might not be all of them. I don't see how you can claim to know that the best strategies are all in this category without actually doing the relevant cost-benefit calculations.
My claim is based on historical analysis. Historically, the ideas that have benefited humanity the most in the long term are things like capitalism, science, the criminal justice system, and (to a lesser extent) democracy. These ideas are all based on aligning individual self-interest with the interests of the society as a whole.
Moral exhortation, it must be noted, also has a hideous dark side, in that it delineates an ingroup/outgroup distinction between those who accept the exhortation and those who reject it, and that distinction is commonly used to justify violence and genocide. Judaism, Christianity and Islam are all based on moral exhortation and were all used in history to justify atrocities against the infidel outgroup. The same is true of communism. Hitler spent a lot of time on his version of moral exhortation. The French revolutionaries had an inspiring creed of "liberty, equality and fraternity" and then used that creed to justify astonishing bloodshed, first within France and then throughout Europe.
I find your list of historical examples less than perfectly convincing. The single biggest success story there is probably science, but (as ChristianKl has also pointed out) science is not at all "based on aligning individual self-interest with the interests of the society as a whole"; if you asked a hundred practising scientists and a hundred eminent philosophers of science to list twenty things each that science is "based on" I doubt anything like that would appear in any of the lists.
(Nor, for that matter, is science based on pursuing the interests of others at the cost of one's own self-interest. What you wrote is orthogonal to the truth rather than opposite.)
I do agree that when self-interest can be made to lead to good things for everyone it's very nice, and I don't dispute your characterization of capitalism, criminal justice, and democracy as falling nicely in line with that. But it's a big leap from "there are some big examples where aligning people's self-interest with the common good worked out well" to "a good moral system should never appeal to anything other than self-interest".
Yes, moral exhortation has sometimes been used to get people to commit atrocities, but atrocities have been motivated by self-interest from time to time too. (And ... isn't your main argument against moral exhortation that it's ineffective? If it turns out to be a more effective way to get people to commit atrocities than appealing to self-interest is, doesn't that undermine that main argument?)
The distrust of individual scholars found in science is in fact an example of aligning individual incentives, by making success and prestige dependent on genuine truth-seeking.
But it's a big leap from "there are some big examples where aligning people's self-interest with the common good worked out well" to "a good moral system should never appeal to anything other than self-interest".
The claim is not so much that moral appeals should never be used, but that they should only happen when strictly necessary, once incentives have been aligned to the greatest possible extent. Promoting efficient giving is an excellent example, but moral appeals are of course also relevant on the very small scale. Effective altruists are in fact very good at using self-interest as a lever for positive social change, whenever possible - this is the underlying rationale for the 'earning to give' idea, as well as for the attention paid to extreme poverty in undeveloped countries.
The distrust of individual scholars found in science is in fact an example of aligning individual incentives, by making success and prestige dependent on genuine truth-seeking.
Scientists generally do trust scientific papers to not lie about the results they report.
Even an organisation like the FDA frequently gives companies the presumption of correct data reporting, as the Ranbaxy case demonstrates well.
I think I’ve been in the top 5% of my age cohort all my life in understanding the power of incentives, and all my life I’ve underestimated it. And never a year passes but I get some surprise that pushes my limit a little farther.
His favorite example is Federal Express. Of course, in a business like Federal Express, self-interest incentives are the biggest driver of performance.
That doesn't mean that they are the biggest driver in a project like Wikipedia.
Historically, the ideas that have benefited humanity the most in the long term are things like capitalism, science, the criminal justice system, and (to a lesser extent) democracy. These ideas are all based on aligning individual self-interest with the interests of the society as a whole.
What does science have to do with self-interest? Making one's claims in a way that lets others falsify them isn't normally in people's self-interest.
Science appeals to sacred values of truth to prevent people from publishing results based on faked data. If it didn't, and people faked data whenever it was in their self-interest, the scientific system wouldn't get anywhere.
There may be an ethically relevant distinction between a rule that tells you to avoid being the cause of bad things and a rule that says you should cause good things to happen. However, I am not convinced that causality is what separates them: as far as I can tell, both concepts are about causality. We may be using words differently; do you think you could explain why you believe this distinction is about causality?
In my understanding, consequentialism doesn't accept a moral distinction between sins of omission and sins of action. If a person dies whom I could have saved through some course of action, I'm just as guilty as I would be if I murdered the person. In my view, there must be a distinction between murder (=causing a death) and failure to prevent a death.
If you want to be more formal, here's a good rule. Given a death, would the death still have occurred in a counterfactual world where the potentially-guilty person did not exist? If the answer is yes, the person is innocent. Since lots of poor people would still be dying if I didn't exist, I'm thereby exonerated of their deaths (phew). I still feel bad about eating meat, though.
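In symbols, a sketch of that rule (the notation here is mine, nothing standard):

```latex
% w: the actual world; w_{-p}: the counterfactual world with person p removed;
% d: the death in question. The proposed test for guilt:
\mathrm{Guilty}(p,d) \iff \big(d \text{ occurs in } w\big) \wedge \big(d \text{ does not occur in } w_{-p}\big)
```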
I also believe in locality as an ontologically primitive moral issue.
Have you read Scott Alexander's piece on Newtonian ethics?
If we look at this issue from the angle of "ethics is a memetic system evolved by cultural group selection", then I guess it makes sense that (1) systems promoting helping your cultural group would have an advantage over systems promoting helping everyone to the same degree, and (2) systems that allow people to achieve the "ethical enough" state reasonably fast would have an advantage over systems where no one can realistically become "ethical enough".
The problem appears when someone tries to do an extrapolation of that concept.
I am not sure how to answer the question "should we extrapolate our ethical concepts?". Because "should" itself is within the domain of ethics, and the question is precisely about whether that "should" should also be extrapolated.
I won't talk about your first two points - I kind of agree (but I'm an anti-realist and think you're a bit strong in your beliefs). I'd like to hear more about
I do not believe a good ethical system should rely on moral exhortation,
I don't get it. Ethical systems exist in people regardless of the transmission and enforcement mechanism. Put another way, what mechanism would you add to EA that would make it better? EA + force doesn't seem an improvement. EA + rejection of heretics likewise seems a limitation rather than an improvement.
I'd also like to point out that "the free rider problem" isn't fundamental. My preference for solving it is to be so productive that I just don't care if someone is riding free - as long as they and I are happy, it's all good.
Ignoring the free rider problem until we get the holodeck doesn't seem to be a serious solution. If you have a deadbeat brother sponging off you it is all well and good to think that one day you'll win the lottery and you won't care. That only works with your own money though. DB is talking about a system that you are trying to get other people to buy into. They won't do that if your system is transparently rob-able. They'll rob it instead. People aren't dumb. Give em the choice of pulling the cart or sitting on it and you pull alone.
I mostly meant that "free rider" isn't a problem in altruism (where you pay for things if you think it improves the world), only in capitalist financing (where you pay only for things where you expect to capture more value than your costs).
ALL recipients of charity and social support are free riders: they're taking more value than they're contributing. And I don't care, and neither should you. Calling them "deadbeats" implies you know and can judge WHY they're in need of help, and you are comparing deservedness rather than effectiveness. I recommend not doing that; deciding what people deserve pretty much cannot be done rationally.
I mostly meant that "free rider" isn't a problem in altruism
Actually, it is. The problem with 'free riding' is not that it's somehow unfair to the people who are picking up the slack, it's that it distorts behavior. You don't want to give money to beggars if this just incents more people to beg and begging is a horrible job - and this is true even if you're altruistic towards people who might beg. You'll need to find a way to give money that doesn't have these bad consequences, even if that means expending some resources.
I was going for a specific common situation, a family member who is mooching. I didn't mean that all recipients of charity are deadbeats. Obviously, that's going to depend on the individuals in question.
This is a "teach/give fish" issue here. If you give people stuff they don't scarequotes earn unscarequotes then they have, in a way, earned it. I mean, value judgement aside they have it now, right? They were miserable enough in front of you that they got it off you. Good on em. Mad beggar skills.
But that's just on a personal level. If you expand that, and you aren't just a dude who is a soft touch, but actually build an organization on the principle of "see cry, give hanky", then your charity is vulnerable to a free rider attack. You gotta fix that, if you actually want to do good and not just create a client group.
If you've ever seen the situation of "it would actually be bad for me to get a job because I'd lose X benefit" you get what I'm talking about here. It is a real problem, and the fact that it takes a hard heart to look at it doesn't make it less real. You have to solve the free rider problem if you want to do charity well, like you have to solve the impostor problem if you want to do encryption.
Even if you believe that locality matters, EA principles like room-for-funding or focus on effectiveness instead of exerting effort for interventions still apply.
The best strategy to produce ethical behavior is simply to appeal to self-interest
That depends largely on your audience. For some people self-interest is very important. For other people fairness is more important. It's a mistake to generalize too much from one example in that regard.
Clare Graves's developmental theory, for example, groups people into different stages, with different stages being differently motivated.
This is a selection of papers put out by DeepMind in just the first half of this year.
One-shot learning with Memory Augmented Neural Networks:
By using a specialized memory node, a network is trained that can learn to recognize arbitrary new symbols after just a handful of examples. This by itself is a landmark: "one-shot learning" is one of the holy grails on the path to GAI.
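To give a flavor of the mechanism, here is a minimal sketch (my own illustrative code, with made-up sizes) of the content-based memory read at the heart of such memory-augmented networks: a controller emits a key, the key is matched against stored memory rows by cosine similarity, and the read vector is a softmax-weighted average of the rows:

```python
import numpy as np

def memory_read(key, memory):
    """Content-based read: cosine similarity plus softmax attention.

    key: (d,) query vector emitted by the controller.
    memory: (n_slots, d) matrix of stored rows.
    Returns a (d,) read vector.
    """
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    )
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()           # softmax over memory slots
    return weights @ memory            # weighted average of rows

memory = np.random.randn(128, 40)      # 128 slots, 40-dim rows (illustrative)
print(memory_read(np.random.randn(40), memory).shape)  # -> (40,)
```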
Continuous Control with Deep Reinforcement Learning:
Extension of Q-learning to continuous state spaces.
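Roughly, as I read the paper: the argmax over actions in the Q-learning target (impossible in a continuous action space) is replaced by a learned deterministic policy, and the actor is trained by following the critic's gradient. A sketch:

```latex
% Critic target, with the (target) policy supplying the next action:
y_t = r_t + \gamma \, Q_{\phi'}\!\big(s_{t+1}, \mu_{\theta'}(s_{t+1})\big)
% Deterministic policy gradient for the actor:
\nabla_\theta J \approx \mathbb{E}_s\!\left[\nabla_a Q_\phi(s,a)\big|_{a=\mu_\theta(s)} \, \nabla_\theta \mu_\theta(s)\right]
```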
Unifying Count-Based Exploration and Intrinsic Motivation:
Q-learning modified to include an incentive for exploration allows greater progress on the previously intractable Montezuma's Revenge Atari game.
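A minimal sketch of the idea, in my own toy version: the paper derives pseudo-counts from a learned density model, whereas this uses a plain tabular count, and `beta` and the state discretization are illustrative stand-ins:

```python
import math
from collections import defaultdict

visit_counts = defaultdict(int)

def shaped_reward(state, env_reward, beta=0.05):
    """Add an exploration bonus that shrinks as a state is revisited."""
    key = discretize(state)
    visit_counts[key] += 1
    return env_reward + beta / math.sqrt(visit_counts[key])

def discretize(state):
    # Placeholder abstraction: any hashable summary of the raw observation.
    return tuple(round(x, 1) for x in state)
```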
Asynchronous Methods for Deep Reinforcement Learning:
Actor-critic architectures yield improved performance over previous Q-learning architectures.
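The update each asynchronous worker applies is the standard advantage actor-critic gradient; schematically (an n-step sketch, not the paper's full statement):

```latex
% n-step return bootstrapped from the learned value function:
R_t = \sum_{i=0}^{n-1} \gamma^i r_{t+i} + \gamma^n V_\phi(s_{t+n})
% Policy gradient, weighted by the advantage estimate R_t - V(s_t):
\nabla_\theta \log \pi_\theta(a_t \mid s_t)\,\big(R_t - V_\phi(s_t)\big)
```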
Learning gradient descent by gradient descent:
An LSTM network is used to learn the "learning algorithm" rather than using gradient descent or some other default algorithm; obtains what appears to be remarkably superior performance.
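Schematically, as I understand the setup: the optimizee's parameter update is no longer a hand-designed rule but the output of a recurrent network m (the LSTM) with its own parameters phi, fed the optimizee's gradients:

```latex
% theta: optimizee parameters, f: optimizee loss, h: LSTM hidden state.
\theta_{t+1} = \theta_t + g_t,
\qquad
\begin{bmatrix} g_t \\ h_{t+1} \end{bmatrix}
  = m\big(\nabla_\theta f(\theta_t),\, h_t;\ \phi\big)
```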
I'm not even going to bother linking the Go and general Atari papers. And the big Atari one was last year, anyway.
I'm getting a little bit concerned, folks.
One person on IRC asked me if this is what the Singularity might look like from inside. I asked them, 'if this wasn't, how would the world look different?' Neither they nor I knew.
Maybe people would point to the AI solving real world problems instead of just various more academic results?
First, they are solving real-world problems. But as usual, companies talk a lot more about the research than the trade secrets. Google uses it heavily, even for the crown jewel of Search. The DeepMind post yesterday mentions DQN is being used for recommender systems internally; I had never seen that mentioned anywhere before, and I don't know how DQN would even work for that (if you treat every YT video, say, as a different 'action' whose Q-value is being estimated, that can't possibly scale; but I'm not sure how else recommending a particular video could be encoded into the DQN architecture). Google Translate will be, is, or has already been rolling out the encoder-decoder RNN framework, delivering way better translation quality (media reports and mentions in talks make it hard for me to figure out exactly which). The TensorFlow promotional materials mention in passing that TF and trained models are being used by something like 500 non-research groups inside Google (what for? Of course, they don't say). Google is already rolling out deep learning as a cloud service in beta, to make better use of all their existing infrastructure like the TPU. DeepMind recently managed to optimize Google's already hyper-optimized data centers to reduce cooling electricity consumption by 40% (!), but we're still waiting on the paper to be published to see the details.

The recent Facebook piece quotes them as saying that FB considers their two AI labs to have already paid for themselves many times over (how?); their puff-piece blog post about their text framework implies that it's being used all over Facebook in a myriad of ways (which don't get explained). Baidu is using their RNN work for voice recognition on smartphones in the Chinese market, apparently with a lot of success; given the language barrier there, and Baidu's similarly comprehensive scope compared to Google and Facebook, they are doubtless using NNs for many other things. Tesla is already (somewhat recklessly) rolling out self-driving cars; the Mobileye system powering them doesn't use a pure end-to-end CNN framework like Geohot and some others, but Mobileye does acknowledge using NNs in their pipelines and is actively producing NN research. People involved say that companies are spending staggering sums.
Second, in the initial stages of a Singularity, why would you expect a systematic bias towards all the initial results being reported as deployed commercial services with no known academic precedents? I would expect it the other way around: even when corporations do groundbreaking R&D, it's more typical for it to be published first and then start having real-world effects. (eg Bell Labs - things like Unix were written about long before AT&T started selling it commercially.)
I am posting here to karma-whore, so that I may one day be allowed to actually contribute to the community!
I run H+Pedia, the wiki of transhumanism and futurism, which has been a great ongoing project to map out everything from emerging technology developments through to political and ideological advocacy in this space.
I'm also a cybercrime researcher, so my life is rather cyberpunk.
https://www.newscientist.com/article/dn25458-blood-of-worlds-oldest-woman-hints-at-limits-of-life
Possible clues about the limits of lifespan, but also about how we might get around those limits and how people might stay healthier for longer.
In van Andel-Schipper’s case, it seemed that in the twilight of her life, about two-thirds of the white blood cells remaining in her body at death originated from just two stem cells, implying that most or all of the blood stem cells she started life with had already burned out and died.
Fascinating. I'd be interested in more measurements of the population structure of hematopoietic stem cells with age. It would be interesting to see if this is a purely stochastic process of lineage loss and expansion with time, or if particular lineages reliably grow faster and crowd out the competition.
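For intuition about what the purely stochastic scenario would look like, here is a minimal Moran-style toy simulation (all numbers illustrative, nothing biological): every lineage is selectively identical, yet drift alone whittles a thousand starting lineages down to a handful of dominant ones:

```python
import random
from collections import Counter

def simulate(pop_size=1000, steps=2_000_000, seed=0):
    """Neutral drift: each step, one random cell is replaced by a copy
    of another random cell. No lineage has any fitness advantage."""
    rng = random.Random(seed)
    cells = list(range(pop_size))  # start with one cell per lineage
    for _ in range(steps):
        cells[rng.randrange(pop_size)] = cells[rng.randrange(pop_size)]
    return Counter(cells)

lineages = simulate()
print(len(lineages), "of 1000 lineages survive;",
      "the largest holds", max(lineages.values()), "cells")
```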
Overcoming Eager Evidence
Does anyone know any good way to make a point that one believes is true on its own merits but clearly benefits the speaker or is easier for the speaker?
Suppose a poor person is saying we should all give more money to poor people: are there ways to mitigate the effect of "You're only saying that to benefit yourself", beyond either finding someone else without that perceived (and likely actual, but probably smaller than perceived) bias, or just taking the hit and having a strong enough case to overwhelm that factor?
I've noticed that my System 1 automatically discounts arguments for points that benefit the speaker, and discounts them even more when the speaker sounds prideful, or like they're trying to grab status that isn't due to them, than when the speaker sounds humble.
I've also noticed that my System 1 has stopped liking the idea of donating to certain areas of EA quite as much after people who exclusively champion those causes have somehow been abrasive during a conversation I've listened to.
Well, provide enough evidence/arguments so that the point stands on its own merit. The general stance is "I'm not asking you to trust me, look at the evidence yourself".
Yeah, for want of a specific book counter, that's what I figured. But if there WERE a book method to bypass it, I figured this is the community that would know, and it'd be worth knowing. Thanks anyway.
The standard "book counter" would be to point out that the objection is a fallacious argumentum ad hominem. However, unless you are in a formal or quasi-formal debate situation or addressing an academic audience, Lumifer's suggested approach is preferable, IMO.
ETA: I wonder why this was downvoted; it seems like a non-controversial comment that is relevant to the topic.
You're quite right of course. I'll probably do both, point out the invalid argument AND have a rock solid argument of my own. Thank you for your input.
There are two ways to read "good way".
The first is under the norms of rational discussion. Under those norms, people who make statements where they have a conflict of interest disclose that conflict.
The second way to read "good" is to look at persuasive power. There are various rhetorical strategies one can use to be more persuasive.
That's a good point; sorry for the ambiguity.
I believe my point to be correct and want myself and my interlocutor to agree on the correct answer. Therefore I want both: If we both reach a truth that is not my prior belief, that's a win, and if I get my interlocutor to agree with a true point that's a win. If I'm right and fail to get agreement that is a loss, and if I am wrong and get agreement, that is a greater loss.
So basically: I'm greedy. Answers to both questions please :)
From the rhetorical side, you can sometimes gain an edge by starting with a leading question or with stating a problem. "I recently found myself in the unusual position of having some money to spare; so I asked myself, where can this money do the most good?"
Your audience may have any number of answers, but you've started by framing the matter in a favorable way (not "can I spare the money", but "when I have money to spare", and not "talking about economics" but "talking about morality"). This has the added advantage (or disadvantage) of encouraging alternate solutions... Someone in your audience might make a good argument for AI research, perhaps even convincing you to change your mind :-)
This should be applicable to most arguments: riding bikes ("When we're looking for ways to be more healthy..."); veganism ("If we are looking for ways to reduce our ecological impact..."); protectionism ("How can we keep Americans in their current jobs?").
Sliding just a bit more to the dark side, try stating another possibility, preferably one that you suspect that your audience has already heard of and is suspicious of, and then giving good reasons against it. Of course, this requires that you know your audience well enough.
There are many interesting anecdotal reports on beta-blockers for social anxiety. The best review of the topic appears to be very old, from 1999 specifically. It is not systematic, and cites just two pieces of evidence, both of small sample size (but better than web anecdotes for generalisation). MAOIs are thought to be the most effective, but come with physical risks and dependence, so they aren't usually prescribed anyway. The juicy piece of evidence is that flooding, according to this study, is more effective than the drug used (presumably comparable to the primary alternative, propranolol), and that propranolol isn't even consistently superior to placebo. It would be most interesting to see how it fares as an adjunct to flooding, since, as some people on a pickup forum have theorised, it would make approaching easier (equivalent to flooding, really). However, a conservative interpretation with less wishful thinking is that no, it's not a magic bullet.
I am really peeved that a preeminent academic and clinical neurologist specialising in difficult diagnoses gave me a schizoaffective diagnosis. It's practically the worst in the book. After all I have done to improve my mental health, he still thinks I'm practically irredeemably ill. There is no higher court of diagnostic appeal now. On the bright side, I don't have any neurological conditions other than a learning disorder, and he, as well as other clinicians, has taken clinically relevant but incorrect notes based on what I have said (I share blame), and they have likely biased one another. Fucking hell, I have worked so hard to avoid this nonsense. Now I have more or less been diagnosed with every mental illness. I hope that speaks more to the oversensitivity of the tests and to overdiagnosis. On the other hand, I really need to come to terms with this stuff. Sorry this wasn't a particularly useful post to share, but I am really frustrated right now and quite upset. I came to South America to finally partake in ayahuasca, but getting a psychotic disorder diagnosis by email after already arriving overseas, particularly after the clinician didn't tell me in person, is upsetting and indicates I shouldn't try a psychedelic here. Who would ever take me seriously as a rationalist if they knew I am so fundamentally irrational...
1,500 scientists lift the lid on reproducibility
Survey sheds light on the ‘crisis’ rocking research.
Optimization Methods for Large-Scale Machine Learning
A review of, and commentary on, the past, present, and future of numerical optimization algorithms in the context of machine learning applications.
Does anybody want 'Soft tissue sarcomas: histological diagnosis' by Artemis D. Nash (Biopsy Interpretation series, Raven Press, 1989)? I have a spare one and hope it might be of use to someone.
(Edit to add: since we have so many 'best source' recs, and since it has been mentioned that borrowing books from people makes the books appear more special, and since it is easier for me to just give them away than to loan them, I am going to rustle through my bookshelf and pick a few more presentable ones. I love sending such parcels abroad; it gives me plenty of warm fuzzies. I invite you to do the same!)
Nitpick: reading the first line of your post I was wondering "why would anyone want to have a sarcoma? Does it have some strange / interesting side-effect I wasn't aware of?". Then I understood it was about a book.
I suggest quoting the title, as in:
Does anybody want "Soft tissue sarcomas: etc.".
University Innovation and the Professor's Privilege by Hans K. Hvide, Benjamin F. Jones
National policies take varied approaches to encouraging university-based innovation. This paper studies a natural experiment: the end of the “professor’s privilege” in Norway, where university researchers previously enjoyed full rights to their innovations. Upon the reform, Norway moved toward the typical U.S. model, where the university holds majority rights. Using comprehensive data on Norwegian workers, firms, and patents, we find a 50% decline in both entrepreneurship and patenting rates by university researchers after the reform. Quality measures for university start-ups and patents also decline. Applications to literatures on university technology transfer, innovation incentives, and taxes and entrepreneurship are considered.
A GIF of Bruce Lee saying:
"Your thoughts are WRONG!"
https://gifs.com/gif/the-way-of-the-intercepting-fist-longstreet-bruce-lee-lei-seil-loong-R6VojE
The 2010 Cochrane Review "Hydroxyzine for generalised anxiety disorder" concludes that hydroxyzine is inappropriate for generalised anxiety disorder, in contrast to the SSC author's recommendation at (http://slatestarcodex.com/2015/07/13/things-that-sometimes-work-if-you-have-anxiety/). His methodology is unconventional and unreliable. I encourage him to disclaim this advice.
The 2010 Cochrane Review "Hydroxyzine for generalised anxiety disorder" concludes that hydroxyzine is inappropriate for generalised anxiety disorder,
That's false. They write: "Even though more effective than placebo, due to the high risk of bias of the included studies, the small number of studies and the overall small sample size, it is not possible to recommend hydroxyzine as a reliable first-line treatment in GAD."
They don't recommend it. They haven't found that it's inappropriate.
Inadequate evidence means it's inappropriate. When it comes to medicine, an absence of evidence is evidence of absence (of appropriateness). There is an absence of evidence that hundreds of Chinese herbs are useful for treating x, y, and z conditions. They may be, as we sometimes find, but it's inappropriate to treat conditions with them until then. Similarly, hydroxyzine is inappropriate for treating GAD.
When it comes to medicine, an absence of evidence is evidence of absence
No; in this case, the studies that exist point to hydroxyzine being superior to placebo. It's just that not enough high-quality studies exist for this to be a strong conclusion.
Years passed between the Cochrane review and Scott writing his article, years that might have brought additional studies.
In cases where the published evidence isn't clear it's also possible to use clinical experience to make recommendations. Scott has clinical experience.
Even if most of the effect is due to placebo, having a clear placebo to use in high-anxiety situations might be worthwhile. EFT is likely cheaper, but simply by having the person take an action and show agency, they might get a positive effect.
I appreciate that this is a broad request, but I trust the readership here to recommend interesting things.
I'm looking for an introductory book on non-democratic political systems. I'd be particularly interested in a book that lays out some of the core problems with democracy and proposes alternative solutions.
I often find myself critical of democratic systems ("we shouldn't be voting, I don't trust these people"), but I have little ammunition when it comes to arguing for alternatives. I often hear neoreaction / anarchism thrown around, but I'd actually like to read beyond a Wikipedia article.
Thoughts?
Moldbug is generally the best neoreactionary to read, imho. Google "moldbuggery" to find a site that indexes his output in a more navigable form than the original blog.
Obviously this isn't a book, per se, but his 'gentle introduction' threads are at least novella-sized.
See Seasteading. No good book on it yet, but one will be published in March (by Joe Quirk and LWer Patri Friedman).
Thomas Hobbes' Leviathan is a classic book that argues for monarchy against aristocracy and democracy.
What should you be doing right now if you believe that advances in AI are about to cause large-scale unemployment within the next 20 years (ignoring the issue of FAI for the sake of discussion)?
Make your money in the stock market. Time-for-money is a rube's game. Let your money make money.
One worry here is picking stocks that themselves only make sense in a "time for money" world. It's also maybe not obvious how to bootstrap into such a tactic without starting off with time for money yourself.
(That said, I think expecting your wages to decrease suddenly means you should save more now and invest it, given that you'll need to live off those returns later.)
I sort of thought money was a given. Like, if you don't have money now, and you think that AI is going to make it impossible to work...the question kind of answers itself, right? The economy has failed you, opt out in the manner of your choice. (disability, prison, marry money, whatever)
As far as picking stocks, I think it's fair to say that the companies that sell the AI that puts everyone else out of work will do well, as will the companies that use their products. Whatever flavor the AI unemployment takes ought to give you a bull period in some segments. Watch for those segments and ride them.
A fun little article on training a neural network to recognize images.
And an integration paper too:
Towards an integration of deep learning and neuroscience
If you think about it, everything is just numbers
In my view this is one of the most serious misconceptions about the entire field of machine learning. Sure, if you zoom out far enough, everything is a number or vector or matrix, but it's rare that such a representation is the most convenient or precise one for formulating the learning problem.
Yeah, it shows how a reductionist viewpoint tends to dead-end. But it's so hard to represent stuff in analog...
Uh-oh. Cellphones DO cause cancer?
"In our opinion, the exposure to RF-EMF caused the tumors seen in the male rats in the NTP study.
But proximity matters, as does duration, and level of exposure."
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.