Zaine comments on Welcome to Less Wrong! (2012) - Less Wrong
Hello there!
I think I first saw LessWrong about three years ago, as it frequently came up in discussions on KW, the forum formerly linked to the Dresden Codak comic. This makes mine one of the longer lurking periods, but I've never really felt the urge to take discussion to the actual posts being discussed and talked about them elsewhere when I felt the need to comment. All this changed when Alicorn told me that when I was asked to make a post relevant to LessWrong that meant I actually had to post it on LessWrong (a revelation which I should have probably anticipated). So it has come to this.
The simplest place to start describing myself is by saying that I'm the type of person that skims through the 200 most recent comments to see which ones are well liked before writing anything.* In real life terms, I've finished up my bachelor's degree in December, after making various errors. Unfortunately, with it finished, I have discovered that I lack motivation to pursue a standard career, since just about the only things I find myself caring about are stories, knowing the future (in the general, not the personal, respect), and understanding things, particularly things related to people. (This is probably not normal for a human, but I can't say I mind it.) Fortunately, these things are fairly similar to the things LW is interested in, so it shouldn't be a problem!
These atypical weights in my utility function do, however, leave me with opinions that I think are a lot "darker" than the typical poster's (and I don't just write that for sexy bad-boy appeal). For example:
Well, the first two of those don't even have much to do with my personal preferences. And yet, I'm not a scary person, I promise! While maybe my utility function makes it easier for me to accept these conclusions, the overwhelming majority of my beliefs actually arose from oodles of thinking about the topics, and they are just things that I think are true, whether or not I want them to be. That said, when the enraged zealots come for us, I'm pretty sure I'm going to be one of the first to burn at the stake! I also wish that using smiley faces were more acceptable here, since I wouldn't mind adding an equals-sign-three one to the end of that sentence to convey the intended mood a little better.
Well, this has already gone on too long, but I hope you were not too bored. I might as well mention that at the moment, I'm trying to write a realistic post-apocalyptic novel (where the recovery has set in enough that they're ahead of the previous all-time high), and applying for a Center for Modern Rationality helper position, since I think these things are interesting, and I'd like to explore them before moving on to uninteresting survival strategies if necessary.
Bye for now, and I hope we have illuminating conversations together!
*If you're curious what I found, here are the general conclusions (although some of these are fairly low confidence):
Hello! I'd welcome you, but I can't honestly represent anything or anyone besides, well, me (I'm a complete neophyte). Really, my interest was quite piqued by your thoughts on Mr. Bentham's philosophy, as they happen to be the exact opposite of the conclusion I came to - namely, that utilitarianism is essentially for people who think, "Things could be so much better if I ran things." The main logical process that led to this conclusion was: People aren't being logical → If they were logical, they would consider the probability of the net good of an act, and only act if the probability was very high, or just above normal but still low-risk → What about contentious issues, based upon value systems? Who would make the call on those? → _____.
On that last step I've never really made any progress, as it seems no matter how objective (I consider this word to include the consideration of emotions) and rational you are, on the contentious issues that have no... *
I suppose I made a little bit of progress there, so thank you for the kick - but you can see, I hope, how I think utilitarianism is embraced by people who think on the opposite pole of "I think being nice is good." I don't think it's embraced only by those who think they should be running things anymore, though. That view changed over the course of writing this post, and since this was partly about your thought processes in coming to your conclusion, I've kept mine.
Cheers!
*The following bracketed fragment completes the thought I was going to write, before I cut off the sentence and started from "... Sorry"; it was written ex post facto: [right or wrong, it comes down to the individual value system of the decider(s).]
I think your thought process brings up a few different aspects of evaluating ethical philosophies, and disentangling them would be very helpful.
First, I certainly agree that there are probably people out there who reach utilitarianism through a process of motivated cognition - they want to be in control, and the reason they use (perhaps even to themselves) to make that sound better is that it would be for the good of everyone. However, I also think that there are many other people out there who grew up believing that good is what we should strive for, and that the way to do that is to aim for the greatest benefit of the greatest number of people. These types of people might then reach for utilitarianism not to justify actions they wanted to perform already, but rather as what they perceive to be the complete ethical system closest to their previous objectives.
While the former group of people merely use utilitarianism as an excuse (even if they believe they believe in it), it is actually the latter group whose reasoning I am generally more concerned about. Whereas the dictator types will do what they planned to do anyway, the forces of good types are vulnerable to taking utilitarianism too seriously, and doing such things as, for example, thinking that maybe it's okay to sacrifice one human life if it will save one million ants (without considering ecosystem impacts), which I do not think is a thought that would have ever arisen from their core belief system. Which is not to say that all utilitarians would agree with that trade-off, but I have seen some who seem like they would, and that is just one minor example of the many problems I have with the idea.
The other point I wanted to bring up is that utilitarianism is really a system for general thinking, even if one likes it, not for immediate real-world implementation. Indeed, it is unimplementable, in the mechanism design sense. So the only way (that I can think of) that you could put it into practice in the real world is to have a strong AI (or equivalent) build detailed models of everyone (possibly involving brain scans) and implement a solution based on those (as otherwise, any implementation would suffer from participants refusing to tell the truth about their utility functions). So, the question of "how would contentious decisions be made?" is fairly unanswerable, except through accepting some deviation from utilitarianism.
I hope that helps crystallize your thoughts a little bit.
That said, where there exists a measurable difference between an implementable approximation of utilitarianism and an implementable approximation of some other moral principle X, then it makes sense to consider oneself a utilitarian or an Xian even if one is, as you say, accepting deviations from utilitarianism or X in order to achieve implementability.
Thank you! I'd never really thought of that other (the latter) approach to utilitarianism; that explains a lot.
Nitpick: The use of 'crystallize' in regard to 'thoughts' would, I think, only be recommendable when describing a particularly desirable thought process. I understood 'crystallize' to mean 'elucidate' in this context, but there is cause for confusion.
Thanks! I was sort of using a word experimentally, and it's good to know that it can be a bit confusing. For the record, yes, I did mean it in an elucidate sort of way.