Elithrion comments on Welcome to Less Wrong! (2012) - Less Wrong

Post author: orthonormal 26 December 2011 10:57PM

Comment author: Elithrion 02 April 2012 09:17:45PM 11 points

Hello there!

I think I first saw LessWrong about three years ago, as it frequently came up in discussions on KW, the forum formerly linked to the Dresden Codak comic. This makes mine one of the longer lurking periods, but I've never really felt the urge to take discussion to the actual posts being discussed, and instead talked about them elsewhere when I felt the need to comment. All this changed when Alicorn told me that when I was asked to make a post relevant to LessWrong, that meant I actually had to post it on LessWrong (a revelation which I probably should have anticipated). So it has come to this.

The simplest place to start describing myself is by saying that I'm the type of person who skims through the 200 most recent comments to see which ones are well liked before writing anything.* In real-life terms, I finished up my bachelor's degree in December, after making various errors. Unfortunately, with it finished, I have discovered that I lack the motivation to pursue a standard career, since just about the only things I find myself caring about are stories, knowing the future (in the general, not the personal, respect), and understanding things, particularly things related to people. (This is probably not normal for a human, but I can't say I mind it.) Fortunately, these things are fairly similar to the things LW is interested in, so it shouldn't be a problem!

These atypical weights in my utility function do, however, leave me with opinions that I think are a lot "darker" than those of the typical poster (and I don't just write that for sexy bad-boy appeal). For example:

  • I think utilitarianism is a terrible system to base anything on, and is basically what you adopt if you want to say "I think being nice is good" and want to make it sound like a well-reasoned ethical system. I'd like it better if you just said "I think being nice is good".
  • I think democracy and equality under the law merely look like good ideas because we don't yet have the computational power to implement actually good ideas of which these are at best extremely simplified approximations.
  • I think that seemingly obvious statements such as "we are all agreed that [it] is wrong to kill people (meaning, fully conscious and intelligent beings)", from a highly rated comment by Alejandro1 down the page, are not very obvious and require serious justification. I think there are cases in our world where it is completely acceptable to kill people (although admittedly he probably meant his comment to apply only to a very specific subset of killing people), and there are many possible worlds where such cases would be far more frequent.

Well, the first two of those don't even have much to do with my personal preferences. And yet, I'm not a scary person, I promise! While maybe my utility function makes it easier for me to accept these conclusions, the overwhelming majority of my beliefs actually arose from oodles of thinking about the topics, and they are just things that I think are true, regardless of whether I want them to be true or not. That said, when the enraged zealots come for us, I'm pretty sure I'm going to be one of the first to burn at the stake! I also wish that using smiley faces were more acceptable here, since I would not mind adding an "=3" to the end of that sentence to convey the intended mood a little better.

Well, this has gone on too long already, but I hope you were not too bored. I might as well mention that at the moment, I'm trying to write a realistic post-apocalyptic novel (where the recovery has set in enough that they're ahead of the previous all-time high), and applying for a Center for Modern Rationality helper position, since I think these things are interesting, and I'd like to explore them before moving on to uninteresting survival strategies if necessary.

Bye for now, and I hope we have illuminating conversations together!

*If you're curious what I found, here are the general conclusions (although some of these are fairly low confidence):

  • introductions that include the person's real name are a little bit better liked, but not significantly
  • there is no particular correlation between length and upvotes
  • most introductions reach a rating of 5 over time, even if they're relatively content-free
  • including something that praises LW or HPMoR or the community has a small positive correlation with upvotes
  • introductions which trigger responses of any sort are generally upvoted more (not surprising since they're more visible and overall upvotes per view seem almost universally positive)
  • introductions that describe something fairly unique get noticeably more upvotes
  • general good writing style helps (big surprise there)
  • posts that primarily promote something unrelated to introductions are rated lower
  • mentioning having a PhD or other real-world qualifications seems to be fairly karma-neutral
  • other minor things I have even less confidence in
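The length-versus-upvotes observation above is the kind of thing one could check with a simple Pearson correlation. A minimal sketch, using entirely invented (word count, upvotes) pairs rather than real LessWrong data:

```python
# Sketch of the "no particular correlation between length and upvotes"
# check: estimate the Pearson correlation coefficient between
# introduction length and score. The sample data below is hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# (word count, upvotes) for a handful of imaginary introductions
sample = [(120, 4), (340, 6), (80, 5), (560, 5), (210, 7), (430, 4)]
lengths = [s[0] for s in sample]
scores = [s[1] for s in sample]

r = pearson(lengths, scores)
print(round(r, 3))  # a value near 0 would match "no particular correlation"
```

With only a couple hundred comments and small score ranges, a low-confidence reading of a weak coefficient like this is about all the analysis can support, which fits the hedging in the list above.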
Comment author: Swimmer963 02 April 2012 10:38:55PM 0 points

I think democracy and equality under the law merely look like good ideas because we don't yet have the computational power to implement actually good ideas of which these are at best extremely simplified approximations.

Yeah, probably. Mainly because it seems likely to me that almost any system in place has better, more optimal alternatives which we don't have the computational power to implement. It is a useful statement in some ways, if only to distinguish ideological, "this-is-sacred", versus instrumental, "this is the best we can do so far" types of beliefs. However, a more useful statement would compare democracy and equality to all the other options that require the same computational power or less.

"we are all agreed that [it] is wrong to kill people (meaning, fully conscious and intelligent beings)"

I unpack this statement to mean that, all other circumstances being equal, it's preferable to accomplish your goals in a way that involves not killing conscious beings. This isn't obvious, really, but it's intuitive to humans, who are generally conscious beings who don't want to be dead and who can empathize with other conscious beings and assume they also don't want to be dead. It's not obvious, I guess, that someone else's consciousness, which I can never experience directly, is comparable in value to my own consciousness, which I experience continually... I find myself unable to break it down any further, though, so I think I must take this as an axiom of my ethical system.

Humans have specific brain sub-systems in charge of empathy, which likely evolved for reasons of social cohesion and its survival advantages, and I'm not sure you can break morality any further down than that... but saying those words doesn't cancel the empathy modules either. Empathy would make it hard for me to justify choosing to kill a conscious being right in front of me, and some desire for symmetry or fairness or universality makes my brain want this to be the case everywhere, for all conscious beings, not just those immediately in front of me whose life is in my hands. I don't want someone else a thousand miles away to start killing people either, because [insert axiom] their consciousness is equal in value to mine, thus in a different possible world I could be them, and I really don't want to get killed. Thus it's wrong.

Make any sense?

since just about the only things I find myself caring about are stories, knowing the future (in the general, not the personal, respect), and understanding things, particularly things related to people.

I've started caring about these things much less since setting out on the process of establishing a standard career. It might be caused by years of working too much while studying full time, and the resulting burnout, or just from having to cram a lot of career-relevant stuff into my head and thus having less room left over for bigger ideas. It might also just be from getting older - during the past few years, I've studied a lot and worked a lot, but I also aged up from adolescence to young adulthood, with the accompanying changes in brain development. I would say be warned, though - forcing yourself to focus on something specific might cause you to lose some of your general curiosity.

I might as well mention that at the moment, I'm trying to write a realistic post-apocalyptic novel (where the recovery has set in enough that they're ahead of the previous all-time high

Sounds fascinating. I'm not sure I've read any post-apocalyptic novels where the current level of development was higher than that before the apocalypse, which is how I'm interpreting it. I've completed what I guess could be called a post-apocalyptic novel, though some realism was compromised in the name of a more exciting and compelling narrative. Best of luck!

Comment author: Elithrion 03 April 2012 03:25:09AM 0 points

However, a more useful statement would compare democracy and equality to all the other options that require the same computational power or less.

This is definitely true. That said, I actually do have at least two systems that I prefer to democracy that are implementable at current processing power levels (they might have somewhat higher needs than democracy, but nothing huge). Equality probably actually does require a lot of processing power to shift completely. However, it is conceivable that we could benefit from creating additional classes of citizens with widely different rights (currently we have children and the mentally ill in this category), although I have not thought about that too much, so I'm not sure if we actually would or not.

I unpack this statement to mean that, all other circumstances being equal, it's preferable to accomplish your goals in a way that involves not killing conscious beings.

Sorry, it was probably bad of me to quote without context. What he actually meant (in my interpretation) was that it is clear that it should be illegal to kill adult human beings, which was part of his argument that it should be illegal to kill infants (search it if you want the full context), so it is to this claim that I took exception. Certainly, I would agree that if all else is equal (a premise that is almost never true, unfortunately), it would be better not to kill people than to kill people.

In particular, I think the reason that some view it as possibly okay for parents to kill infants is that the status of infants is close to that of property or pets of their parents. It is here that the analogy breaks down, because our current society does not have adults as pets or property of other adults. However, I think such a situation would be perfectly acceptable - for example, it should be legal for me (in full possession of my faculties, without coercion, etcetera) to sign over to someone else the right to kill me if he or she so chooses. After such a contract is made, I believe it should be completely legal for them to kill me if they wish it.

Additionally, we already implicitly provide such rights to any state we enter with some conditions attached (I use a social contract approach here, which is not to indicate I endorse social contracts) - they can kill us if we violently and dangerously resist the police, in some places if we break the law in certain ways, and further the state transfers the right to kill us to private citizens if we attack them and sometimes in other instances. As such, there are indeed many cases when killing people is deemed acceptable and proper, and I think most of these instances are not outrageous.

I'm not sure I've read any post-apocalyptic novels where the current level of development was higher than that before the apocalypse, which is how I'm interpreting it. I've completed what I guess could be called a post-apocalyptic novel, though some realism was compromised in the name of a more exciting and compelling narrative.

Yep, you're interpreting that correctly. Mostly the apocalypse serves as an extremely well justified excuse for a big shake-up of society without massive technological progress. To be honest, I like fantasy better than science fiction in general, since it explores societies more than it does technology, and I think that is much more appealing in a novel. So, I'm trying to get the best of both worlds - a character-driven story exploring interesting societal patterns, and a setting that is somewhat familiar to anyone who knows the modern world and makes them think about where we might head. Although I'm not sure to what extent this thoughtful motivation sprang up after I had a story idea I really liked, which is what really triggered the inspiration to write a novel. We'll see how it goes anyway, and thanks for the interest!

Comment author: Zaine 02 April 2012 09:57:38PM 0 points

Hello! I'd welcome you, but I can't honestly represent anything or anyone besides, well, me (I'm a complete neophyte). Really, my interest was quite piqued by your thoughts on Mr. Bentham's philosophy, as they happen to be the exact opposite of the conclusion I came to - namely, that utilitarianism is essentially for people who think, "Things could be so much better if I ran things." The main logical process that led to this conclusion was: People aren't being logical < If they were logical, they would consider the probability of the net good of an act, and only act if the probability was very high, or just above normal but still low risk < What about contentious issues, based upon value systems? who would make the call on those? < _____.

On that last step I've never really made any progress, as it seems no matter how objective (I consider this word to include the consideration of emotions) and rational you are, on the contentious issues that have no... *

  • ... Sorry, I just had a thought. I remember reading somewhere that for things that have no right or wrong, after the collective evidence has been weighted for accuracy, legitimacy, credibility, etcetera, the option(s) that have the greatest probability of truth should (as a rationalist) be treated as truth for the time being; if some new evidence tips the scale in the other direction, so follows the belief. This ... means no religion could be rationally considered true - as of now, at least. Thus any governmental system based upon utilitarianism would only tolerate religion insofar as it affects the emotional welfare of its citizens. And that if - absolutely all other possible and impossible avenues having been taken and failed - either 3,000 innocents or 3 brilliant, Nobel-prize-winning, humanity-revolutionizing genius scientists had to die, then it would come down to the probabilities of each possibility's net good (utility) when deciding which to pick.

I suppose I made a little bit of progress there, so thank you for the kick - but you can see, I hope, how I think utilitarianism is embraced by people who think on the opposite pole of "I think being nice is good". I don't think it's embraced only by those who think they should be running things anymore, though. That changed since the beginning of the post, and since this was a bit about your thought processes in coming to your conclusion, I've kept mine.
Cheers!

*The following bracketed fragment completes the thought I was going to write, before I cut off the sentence and started from "... Sorry"; it was written ex post facto: [right or wrong, it comes down to the individual value system of the decider(s).]

Comment author: Elithrion 02 April 2012 11:19:45PM 0 points

I think your thought process brings up a few different aspects of evaluating ethical philosophies, and disentangling them would be very helpful.

First, I certainly agree that there are probably people out there who reach utilitarianism through a process of motivated cognition - they want to be in control, and the reason they use (perhaps even to themselves) to make that sound better is that it would be for the good of everyone. However, I also think that there are many other people out there who grew up believing that good is what we should strive for, and that the way to do that is to aim for the greatest benefit of the greatest number of people. These types of people might then reach for utilitarianism not to justify actions they wanted to perform already, but rather as what they perceive as the closest complete ethical system to their previous objectives.

While the former group of people merely use utilitarianism as an excuse (even if they believe they believe in it), it is actually the latter group whose reasoning I am generally more concerned about. Whereas the dictator types will do what they planned to do anyway, the forces of good types are vulnerable to taking utilitarianism too seriously, and doing such things as, for example, thinking that maybe it's okay to sacrifice one human life if it will save one million ants (without considering ecosystem impacts), which I do not think is a thought that would have ever arisen from their core belief system. Which is not to say that all utilitarians would agree with that trade-off, but I have seen some who seem like they would, and that is just one minor example of the many problems I have with the idea.

The other point I wanted to bring up is that utilitarianism is really a system for general thinking, even if one likes it, not for immediate real-world implementation. Indeed, it is unimplementable, in the mechanism design sense. So the only way (that I can think of) that you could put it into practice in the real world is to have a strong AI (or equivalent) build detailed models of everyone (possibly involving brain scans) and implement a solution based on those (as otherwise, any implementation would suffer from participants refusing to tell the truth about their utility functions). So, the question of "how would contentious decisions be made?" is fairly unanswerable, except through accepting some deviation from utilitarianism.

I hope that helps crystallize your thoughts a little bit.

Comment author: TheOtherDave 02 April 2012 11:58:46PM 1 point

That said, where there exists a measurable difference between an implementable approximation of utilitarianism and an implementable approximation of some other moral principle X, then it makes sense to consider oneself a utilitarian or an Xian even if one is, as you say, accepting deviations from utilitarianism or X in order to achieve implementability.

Comment author: Zaine 03 April 2012 02:10:16AM 0 points

Thank you! I'd never really thought of that other (the latter) approach to utilitarianism; that explains a lot.
Nitpick: The use of 'crystallize' in regard to 'thoughts', I think, would only be recommendable when describing a particularly desirable thought process. I understood crystallize to mean elucidate, in this context, but cause for confusion is there.

Comment author: Elithrion 03 April 2012 04:17:58AM 0 points

Thanks! I was sort of using a word experimentally, and it's good to know that it can be a bit confusing. For the record, yes, I did mean it in an elucidate sort of way.

Comment author: TheOtherDave 02 April 2012 09:35:20PM 0 points

Welcome! FWIW, your thoughts about democracy, equality under the law, and the utility of killing people are not uncommon around here. Possibly your thoughts about utilitarianism are as well, although it depends rather a lot on what you consider a better system to base anything on, and on just what you mean by "utilitarianism".

Comment author: Elithrion 02 April 2012 10:42:25PM 0 points

Well, at the very least, I am fairly confident that my particular conclusions about what alternative systems I prefer are not common. As evidence of deviation from the mean, I find myself more in favour of legal infanticide (or even filicide depending on your preferred age ranges for each word) than the most pro-infanticide positions expressed in that big debate down below, which in my case is merely a quick consequence of other, possibly more unusual, positions.

Maybe I'll do a summary in the actual discussion area when I feel up to it, or if people are genuinely curious as to what my positions are.