I'll start drafting a Value of Information post. (Time to put those decision theory classes to work!)
Competition is encouraged; just send me a note / comment here if you're also drafting one so I'll get the impetus to write faster/better.
Post finished. I didn't expect it to take me a month, but am glad I finished it by that milestone, at least.
This seemed like the sort of post worth bumping to see what the current state of each of these is. How many of these got written, and which still seem useful?
I would be especially interested in seeing The Value of Information. UnBBayes also, maybe -- I don't know how useful it would be from glancing at it.
I think I can summarize hedonomics right here: most people spend too much time optimizing for the acquisition of more objects and not enough time optimizing their use of the objects they already have (where "objects" can be anything we want, not just physical items).
ex: "If only I was better looking!"
economics: acquire more of the properties that make you attractive to other people
hedonomics: are you maximizing your looks given your current resources?
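To make the contrast above concrete, here's a minimal toy sketch (my own illustration; the value function, the resource names, and all the numbers are made-up assumptions). It's only meant to show that the two questions are different searches, not to claim which one pays off more in general.

```python
# Toy model: the value you actually get from a resource depends on both how much
# of it you have and how well you use it. Names and numbers are hypothetical.

def realized_value(resources: dict, utilization: dict) -> float:
    """Value actually enjoyed: each resource counts only insofar as it is used."""
    return sum(amount * utilization.get(name, 0.0) for name, amount in resources.items())

resources = {"wardrobe": 5.0, "fitness": 3.0}      # what you already have
utilization = {"wardrobe": 0.4, "fitness": 0.2}    # how well you currently use it

baseline = realized_value(resources, utilization)

# Economics-style question: what do I gain by acquiring more?
gain_acquire = realized_value({**resources, "fitness": 4.0}, utilization) - baseline

# Hedonomics-style question: what do I gain by better using what I already have?
gain_use = realized_value(resources, {**utilization, "wardrobe": 0.8}) - baseline

print(f"gain from acquiring more: {gain_acquire:.1f}")           # 0.2 in this toy case
print(f"gain from better use of what you have: {gain_use:.1f}")  # 2.0 in this toy case
```

The heuristic is just the observation that people tend to put most of their effort into the first search and comparatively little into the second.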
I think the distinction relies on a naive understanding of economics, but it is nonetheless a good heuristic in general to ask that sort of question.
The "Dr. Evil problem" is pretty straightfoward as anthropics goes. Equivalent situations are already discussed in various places on LW. On the other hand, I don't know if we've had a basic "here's how anthropics looks as probability theory" post. I may do that.
Could you clarify? Which posts are you referring to?
If you do write that article, I'd be very interested to read it.
So Eliezer's articles related to The Anthropic Trilemma and Boltzmann Brains basically treat the "Dr. Evil" problem as an easy introduction to harder problems. Katja Grace also has some really good articles on anthropic reasoning, some here and some on her own blog.
I'm having trouble seeing the relevance of either of those posts. Elga's article is about static self-locating belief, i.e., which of two individuals I should believe myself to currently be. Eliezer seems only to be questioning the coherence of dynamic self-locating belief, i.e., which individual I should believe to be my future self. And I'm not presently sure how the Boltzmann Brain post touches on this at all.
This is me making a public commitment to get #'s 6 or 8 done within the next 3 months, one of which will be my first post to LessWrong.
So... nobody wants to try writing the 'Informal Fallacies as Errors in Bayesian Reasoning' post?
But that paper is so badass!
I spent a few hours last night reading through the material and writing down some general ideas. However, I soon felt like I was "faking it" when it came to the math. That was a red light for me to stop. I don't think I understand enough of the mathematics to explain it well. I wanted to say this ASAP so I don't prevent anyone else from tackling a problem they think I'm already working on.
Instead, if no one else has already written it, I may turn my attention to the post on Motivational Externalism. That's also a topic I'm interested in, as well as one that is less math-intensive. Is that topic still open, Luke?
Thanks for posting this notice.
As for motivational externalism: as far as I know, nobody is developing that post. Go for it! If you need help, I'm happy to point you to the right review articles, but I probably can't help more than that. The review article I link to above is the best starting place.
You're welcome.
Thanks for the encouragement! I'm reading the review article you linked to right now. I'm also reading your advice on writing. (Must. Remember. Short. Sentences.)
I understand that your ability to help is limited. You're a very busy guy doing very important work. And I don't say that to blow smoke.
But when you have the time, I'd definitely appreciate if you could point me toward the right review articles.
I finished the review article Schroeder et al. (2010). Based on it, here is a sketch of my tentative outline:
I have many questions. But for now, I'll restrain myself to the three most important ones.
Does this outline reflect what you had in mind? I read that this topic is necessary for your Metaethics sequence. I want to save you the time of writing this yourself so you can focus on topics I can't. Division of labor and such. In order for that to work, I need to make sure that I'm targeting the specific questions you would have otherwise had to address.
How in-depth should I get with the neuroscience? I want to aim low with my explanations. Illusion of transparency, large inferential gaps, etc. But I'm not sure how low is too low. Are there any useful heuristics for approximating when I need to go less in-depth? More in-depth?
The textbook Schroeder et al. (2010) use as their primary reference for neuroscience is at least 10 years old. Are there other resources with more recent information I should be aware of? Any specific pieces of information that would be useful?
The outline looks perfect! And yes, the main problem with that article is that it is out of date with the neuroscience. I would begin instead with Neuroscience of Human Motivation and the sources it cites, and also Neuroscience of Preference and Choice, if you can get your hands on it.
Sounds great! Thanks for the quick response.
Do you have a PDF copy of Neuroscience of Preference and Choice? If not, do you know anyone who may? Would it be appropriate for me to ask in the discussion section? I've searched online before, but I haven't been able to find a (free) electronic version.
I only have a Kindle copy. The first two chapters are the most important. You could email each author and ask for a pre-print copy of their chapter.
I have a Kindle myself, so that's not a problem. If it's not an inconvenience, I'd appreciate it if you copied the file and sent it to me at:
(REDACTED)
If you can't, I'll email the authors for a pre-print copy.
The first post of the sequence is almost complete. In order to prevent procrastination, I'm giving myself a timeline of 24 hours to finish it. If I don't post it by 3 PM EST on 2/10/12, please downvote this comment until I do.
Apologies for the delay in writing the posts. A combination of holidays, akrasia, and large inferential gaps slowed my progress. Most of my economic reading comes from the Austrian school, so reading neuroeconomics has required me to also become more literate in neoclassical economics. (A subject I've otherwise avoided because of its ability to mind-kill me.)
Edit: The first post has been completed by the deadline. As such, please do not downvote this comment. Thanks!
You're welcome!
Giving myself that arbitrary deadline and penalty did motivate me. I finished writing and editing the first post within a few hours. The penultimate copy is here:
http://www.scribd.com/doc/81121556
If anyone has any suggestions or criticisms, please let me know ASAP. I'm now in the process of formatting the post for the main site. Then, onto the second post, which is much more content heavy.
Actually, I would really love to write that! I've been looking really hard for said paper, assuming it had already been written by someone, somewhere. It is totally badass, and on a topic I'm really interested in.
To be honest, I'm just not sure if I have enough experience and information to write it, and to write it well. I'm willing to give it a shot, though. It's something important to try my hand at.
Do you (or anyone else) have any resources on hand that might be useful? Any advice? (On both the topic itself, and the writing process.)
lukeprog has done this sort of thing before, I think - but that "post" is not a post. It's a sequence!
I am trying to integrate fallacies as errors in Bayesian reasoning into the post I am writing on the principle of charity, the straw man fallacy, and the principle of humanity...it's a lot to think about, organize, and try to present coherently, and those three are a small subset of all the informal fallacies that there are.
For what I'm aiming for, I don't think a sequence is necessary. A lot of the groundwork on Bayesianism has already been laid elsewhere, so I am able to restrict my discussion to the following areas:
If I narrow my scope to these questions, I think I can give a satisfactory overview of the answers in one post. A more thorough investigation (which I perceive that you are aiming for) is valuable and very well might need its own sequence.
But for now, I'm trying to aim very low. I hope that in the future, someone writing that more comprehensive post can say:
Hey! Remember that post PP wrote on informal fallacies as errors in Bayesian reasoning? I'm going to go much more in-depth than he did. Go read his post first as a primer so I don't have to re-tread covered ground, and then come back here for a more thorough analysis.
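To give a flavor of the kind of analysis I have in mind, here's a toy sketch (my own illustration, with made-up numbers): treating "appeal to authority" not as automatically invalid, but as a Bayesian update whose strength depends on how reliable the authority is.

```python
# Toy example: an expert's assertion is evidence for a claim, but how much evidence
# depends on how often the expert asserts false claims. Numbers are hypothetical.

def posterior(prior: float, p_assert_if_true: float, p_assert_if_false: float) -> float:
    """P(claim is true | expert asserts it), by Bayes' rule."""
    numerator = p_assert_if_true * prior
    return numerator / (numerator + p_assert_if_false * (1 - prior))

prior = 0.5

# A moderately reliable expert asserting the claim is real, but modest, evidence.
print(posterior(prior, p_assert_if_true=0.9, p_assert_if_false=0.3))   # 0.75

# Treating the assertion as near-proof (the fallacious version) amounts to assuming
# the expert almost never asserts false claims.
print(posterior(prior, p_assert_if_true=0.9, p_assert_if_false=0.01))  # ~0.99
```

The error, on this framing, isn't in updating on the authority at all; it's in how much weight the update gets.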
One of the classic debates in metaethics/moral psychology is between motivational externalism and motivational internalism. This debate seems to be in the process of being resolved by neuroscience, in favor of motivational externalism.
Moral motivation for humans is an empirical question, but no universally compelling arguments = externalism for minds-in-general, no? And internalism, not "objective morality", is the precise term we should be using for the spooky thing we don't believe in.
While we're listing stuff, one thing I'd like to do at some point would be to look at the literature on philosophical training and critical thinking ability (eg. as measured by the Watson-Glaser test). http://images.austhink.com/pdf/Claudia-Alvarez-thesis.pdf looks like it's the best current starting point.
EDIT: I wound up excerpting that thesis at length: http://lesswrong.com/lw/dhe/to_learn_critical_thinking_study_critical_thinking/
Thinking Too Little or Thinking Too Much.
I'm reading the linked article; how did you find that one?
There are many Less Wrong posts I'd like to write, but I'm starting to admit there are some of them I'll probably never get around to. I need to be doing other things. If anybody wants to write up the post ideas below, go for it! You may also want to announce you're working on one or more of them in the comments, to avoid duplicate work.
In no particular order...