My initial reaction to reading the plan was "Ugh, this guy is going to hold off on revealing which theory is correct until the fourth post and force us to waste time reading a bunch of false theories?! Meh, can't be bothered, I'll just skip all of them."
You might want to work on that.
I'll be covering four theories. Two of them are generally consistent with the neuroscience and two contradict it. The former two are externalist theories of motivation and the latter two are internalist theories of motivation. Externalists claim that beliefs about morality are not sufficient for motivation to act. Internalists claim that they are. This is an important distinction in metaethics.
I could begin with the conclusion and work backwards to support it. "Motivational externalism is obviously correct, due to neuroscientific evidence." Then go on to say what externalism is, what the theories are, and how the neuroscience supports it. Besides leaving out a discussion of internalism, the structure of those posts would be the same as the next three posts I have planned. The only other difference (so far as I can see) is the framing effect used. By starting with the conclusion, it seems like I'm cutting to the chase when I'm actually just cutting out some important information about motivational internalism.
And that information is important because some LW-ers are internalists. For my purpose, it's also necessary to show that these competing theories are inconsistent with the neuroscience. That requires me to explicate the theories in some detail.
Depending on what your interests are, this series of posts may not be for you. If you're not interested in neuroscience and metaethics, this probably won't be worth your time. Alternatively, if you are interested, I think reading these posts will be much more efficient than doing all the research I had to do to write them.
EDIT: However, if you (or anyone else) have a better way of structuring the information, I'd seriously consider it. Given that your comment is getting upvoted, I think I have to be somewhat less confident that my current approach is optimal.
Oh, ok. I assumed there was one obviously true theory and a bunch of obviously false ones, due to that being the case for pretty much every philosophical question.
force us to waste time reading a bunch of false theories?
As long as we aren't informed which are false, it's a good test of our intuitions on these matters. I think it's worthwhile.
Since you asked for feedback:
It seems that I need to read Luke's huge article before even starting on this sequence. I'd wanted to read it anyway, but you will have more readers if you can lower the entrance requirements. (Of course, that may be impossible.)
Since your footnotes just refer to references available online, why not use hyperlinks? "(Adapted from Schroeder et al., 2010)" is nicer than "[1]".
While your subject is interesting, this post mostly consists of references to theories I don't really know. Some examples/"shiny stories" may be nice; or you may want to merge the first two posts next time.
Anyway, this seems like it will be interesting, so thanks for writing it!
I'm trying to keep the entrance requirements reasonably low so that someone without a background in neuroscience and philosophy can still easily understand. But I most likely won't be able to achieve that. There's a lot of technical language, especially on the neuroscience side. Some of it I don't fully understand; there's much that I am still learning. From my experience, understanding the applications of neuroscience (such as those in Luke's "A Crash Course in the Neuroscience of Human Motivation") gets a lot easier once you understand the fundamentals. It lessens the "hazy fog" experience.
I'm seriously considering your second suggestion. If it makes it easier for y'all to read, I'm happy to make the switch. I personally like the footnotes. But I'm writing this to help others, not for my pleasure.
I have loads of examples and stories for the neuroscience part. This post was just to give the lay of the land, so you can see where I'm headed. Whenever possible, I try to break down my work into smaller, more manageable parts. I've tried writing long essays before, and they just become a time sink.
You're quite welcome. Thanks for the feedback!
I'm glad you're doing this, but remember that every post in a sequence must contain new insight or solve a problem or otherwise make the reader think "Huh. I sure am glad I read that post!"
Oh, okay! That's the missing criterion. I had asked below whether this series of posts is "sequence-worthy." Something seemed to be missing. Now I know what it is. Thank you for the clarification.
With that in mind, I think it'd be best to consolidate everything into one post. Apologies for the faux pas.
For an explanation of the flaws, see Churchland (1981).
I think it would be better to embed a link to the relevant paper directly in the article body, in addition to listing it in the footnotes. Currently, if I wanted to see the paper, I'd have to scroll down to the footnotes, then scroll down to the references, and only then can I see the link. Computers were created precisely to take the drudgery out of tasks like these!
Thanks for the feedback! I agree with you. I'm making the change in the updated version.
A question for more experienced LW-ers. Is it appropriate for me to tie all of these posts on moral motivation into a sequence? I'm not quite sure what is considered "sequence-worthy." The sequence page says this:
A sequence is a series of multiple posts on Less Wrong on the same topic, to coherently and fully explore a particular thesis.
I think my posts fall under that category. But they don't fit the prototype of a sequence, a la Mysterious Answers to Mysterious Questions.
Salient characteristics of a "sequence" are multiple posts, with later posts building on previous ones to the extent that they cannot stand alone. These posts don't look like they'll stand on their own (i.e., the conclusion is mostly in the fourth post), so yes, they should be a sequence.
By the upvotes your comment is getting, I infer that people agree with you. Thank you for the clarification.
Edit: Oops, I made a mistake. Sorry about that. I'm going to consolidate the parts into one post.
Imagine you are walking down a Los Angeles street when you see a homeless man. He's sitting outside a coffee shop and begging for food. You stop to give him a sympathetic look. Quickly, you enter the shop to buy a muffin and juice. You then give him the food, which he happily takes. After wishing him well, you continue walking. By many accounts, you have just done a morally good action.[1]
In real life, you may never have given food to a starving man. However, it is likely that you have done other morally good actions.[2] Maybe you have done volunteer work in your community. Maybe you have donated to charity. Think of a time you did a morally good action. Got it? Good.
How can we explain why you committed this moral action? Or more broadly, how can we explain why we act at all? One popular explanation of action comes from folk psychology. As Luke summarizes, "[f]olk psychology posits that we humans have beliefs and desires, and that we are motivated to do what we believe will fulfill our desires."[3] While the folk psychology model is useful for daily life, it has several grave flaws.[4]
In response to these flaws, economists have refined and quantified folk psychology into neoclassical economics. These economic models are more useful than folk psychology for explaining and predicting human action. However, as Luke points out, they still aren't perfect. In "A Crash Course in the Neuroscience of Human Motivation," Luke describes the challenges these models face. Moreover, he details how the neoclassical model can be further improved and reduced through the insights of modern neuroscience.
Nevertheless, that reduction only covers amoral actions. We're still left with the question at the beginning of this post: what is the explanation for our moral actions? What causal roles (if any) do our beliefs, desires, and feelings play? This sequence uses neuroscience to shed light on those questions. It will do so over the course of four posts.
This is the first post, which serves as an introduction. It has introduced the driving question behind the sequence. The remainder outlines the next three posts.
The second post gives an overview of the mainstream philosophical accounts of action. It uses folk psychological terms such as "motivation," as philosophy largely relies on folk psychology. The post delves into the externalist–internalist controversy.
The third post covers the specific philosophical accounts of moral action. It discusses the four mainstream views: instrumentalism, cognitivism, sentimentalism, and personalism.[5] Comparisons between the views show where they agree and disagree. The views are also described with reference to the controversy discussed in the second post. Last, this post details what we should anticipate observing in the brain if we accept one view over the others.
The fourth post concludes the sequence by comparing the anticipated facts of each philosophical view with how the brain actually works. Some views are more consistent with the neuroscience than others are. Thus, those that are consistent are more likely to be correct.[6] And though not yet falsified, those views that contradict current neuroscience have large obstacles to overcome. These challenges should be noted when painting the larger metaethical picture.[7]
Notes
[1] Example adapted from Schroeder et al. (2010).
[2] In this post, I write as though moral realism is correct. I do this for ease of explanation, not because I necessarily agree. This sequence does not depend on the reader holding any particular metaethical view about the existence of moral facts. I encourage moral anti-realists/irrealists to read "morally good actions" as "actions which some accounts consider moral," or something similar.
[3] Quoted from "A Crash Course in the Neuroscience of Human Motivation."
[4] For an explanation of the flaws, see Churchland (1981).
[5] In this context, "cognitivism" doesn't refer to the metaethical view that moral language expresses truth-apt propositions.
[6] As per Bayes' rule, P(H|E) ∝ P(E|H) · P(H), so among views with comparable priors, those that better predict the neuroscientific evidence receive greater posterior credence.
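For reference, that proportionality is just the standard statement of Bayes' theorem, with the evidence term P(E) acting as a normalizing constant:

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} \;\propto\; P(E \mid H)\,P(H)$$

When comparing two views with similar priors, the posterior odds reduce to the likelihood ratio P(E|H1)/P(E|H2); this is the sense in which consistency with the neuroscience counts as evidence for a view.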
[7] Credit for the sequence idea goes to Luke. This is one of his 11 LessWrong articles he'll probably never have time to write. I hope it helps close some inferential gaps for his metaethics sequence. Conversely, any errors in this sequence fall squarely on me. Corrections, advice, and criticism are encouraged. (Especially on my writing and citation style.)
References
Churchland, P. M. (1981). Eliminative materialism and the propositional attitudes. The Journal of Philosophy, 78: 67–90.
Schroeder, T., Roskies, A. L., & Nichols, S. (2010). Moral motivation. In J. Doris (Ed.), The Moral Psychology Handbook (pp. 72–110). Oxford University Press.