All of ChaosMote's Comments + Replies

Since the question is about potential dangers, I think it is worth assuming the worst here. Also, realistically, we don't have a magic wand to pop things into existence by fiat, so I would guess that by default, if such an AI were created, it would be created with ML.

So let's say that this is trained largely autonomously with ML. Is there some way that would result in dangers outside the four already-mentioned categories?

3Charlie Steiner
Well, you might train an agent that has preferences about solving theorems that it generalizes to preferences about the real world. Then you'd get something sort of like problem #3, but with way broader possibilities. You don't have to say "well, it was trained on theorems, so it's just going to affect what sorts of theorems it gets asked." It can have preferences about the world in general because it's getting its preferences by unintended generalization. Information leaks about the world (see acylhalide's comment) will lead to creative exploitation of hardware, software, and user vulnerabilities.

Clearly, you and I have different definitions of "easy".

This was a terrific post; insightful and entertaining in excess of what can be conveyed by an upvote. Thank you for making it.

What you're proposing sounds more like moral relativism than moral nihilism.

Ah, yes. My mistake. I stand corrected. Some cursory googling suggests that you are right. With that said, to me Moral Nihilism seems like a natural consequence of Moral Relativism, but that may be a fact about me and not the universe, so to speak (though I would be grateful if you could point out a way to be a moral relativist without being a moral nihilist).

I think that you're confusing moral universalism with moral absolutism and value monism.

The last paragraph of my previous p... (read more)

But there's that language again that people use when they talk about moral nihilism, where I can't tell if they're just using different words, or if they really think that morality can be whatever we want it to be, or that it doesn't mean anything to say that moral propositions are true or false.

Okay. Correct me if any of this doesn't sound right. When a person talks about "morality", you imagine a conceptual framework of some sort - some way of distinguishing what makes actions "good" or "bad", "right" or "w... (read more)

1Gram_Stone
What you're proposing sounds more like moral relativism than moral nihilism. I think that you're confusing moral universalism with moral absolutism and value monism. If a particular individual values eating ice cream and there are no consequences that would conflict with other values of this individual for eating ice cream in these particular circumstances, then it is moral for that individual to eat ice cream, and I do not believe that it makes sense to say that it is not meaningful to say that it is true that it is moral for this individual to eat ice cream in these circumstances. This does not mean that there is some objective reason to value eating ice cream or that regardless of the individual or circumstances that it is true that it is moral to eat ice cream. The sense in which morality is universal is not on the level of actions or values, but on the level of utility maximization, and the sense in which it is objective is that it is not whatever you want it to be.

I think that using this notation is misleading. If I am understanding you correctly, you are saying that given an individual, we can derive their morality from their (real/physically grounded) state, which gives a real/physically grounded morality (for that individual). Furthermore, you are using "objective" where I used "real/physically grounded". Unfortunately, one of the common meanings of "objective" is "ontologically fundamental and not contingent", so your statement sounds like it is saying something that it isn't.

On a separa... (read more)

0Gram_Stone
I used 'objective and contingent' instead of 'subjective' because ethical subjectivists are usually moral relativists. I noted that I was referring to an objective morality that is contingent rather than ontologically fundamental. But there's that language again that people use when they talk about moral nihilism, where I can't tell if they're just using different words, or if they really think that morality can be whatever we want it to be, or that it doesn't mean anything to say that moral propositions are true or false. I wouldn't ask people those questions. People can be wrong about what they value. The point of moral philosophy is to know what you should do. It's probably best to do away with the old metaethical terms and just say: To say that you should do something is to say that if you do that thing, then it will fulfill your values; you and other humans have slightly different values based on individual, cultural and perhaps even biological differences, but have relatively similar values to one another compared to a random utility function because of shared evolutionary history.

Okay, but at best, this shows that the immediate cause of you being shaken and coming out of it is related to fearful epiphanies. Is it not plausible that whether, at a given time, you find a particular idea horrific or are able to accept a solution as satisfying depends on your mental state?

Consider this hypothetical narrative. Let Frank (name chosen at random) be a person suffering from occasional bouts of depression. When he is healthy, he notices and enjoys interacting with the world around him. When he is depressed, he instead focuses on real ... (read more)

0Fivehundred
I find it hard to believe. But maybe I've always been depressed and that's why I've suffered from them so badly.

@gjm:

Just wanted to say that this is well thought out and well written - it is what I would have tried to say (albeit perhaps less eloquently) if it hadn't been said already. I wish I had more than one up-vote to give.

@Eitan_Zohar:

I would urge you to give the ideas here more thought. Part of the point here is that you are going to be strongly biased toward thinking your explanations are of the first sort and not the second. By virtue of being human, you are almost certainly biased in certain predictable ways, this being one of them. Do you disagree?

Let ... (read more)

0gjm
Thanks! One upvote per reader is plenty enough for me :-).

I definitely know that my depression is causally tied to my existential pessimism.

Out of curiosity, how do you know that this is the direction of the causal link? The experiences you have mentioned in the thread seem to also be consistent with depression causing you to get hung up on existential pessimism.

1Fivehundred
I go through long periods of peace, only to find my world completely shaken as I experience some fearful epiphany. And I've experienced a complete cessation of that feeling when it is decisively refuted.

Your argument assumes that the algorithm and the prisons have access to the same data. This need not be the case - in particular, if a prison bribes a judge to over-convict, the algorithm will be (incorrectly) relying on said conviction as data, skewing the predicted recidivism measure.

That said, the perverse incentive you mentioned is absolutely in play as well.

0benkuhn
Yes, I glossed over the possibility of prisons bribing judges to screw up the data set. That's because the extremely small influence of marginal data points and the cost of bribing judges would make such a strategy incredibly expensive.
ChaosMote310

Great suggestion! That said, in light of your first paragraph, I'd like to point out a couple of issues. I came up with most of these by asking the questions "What exactly are you trying to encourage? What exactly are you incentivising? What differences are there between the two, and what would make those differences significant?"

You are trying to encourage prisons to rehabilitate their inmates. If, for a given prisoner, we use p to represent their propensity towards recidivism and a to represent their actual recidivism, rehabilitation is represen... (read more)

3Houshalter
I really, really wish policy makers considered possible perverse incentive exploits in this much detail. Though I'm not convinced there is any perfect policy that has zero exploits.
3benkuhn
If past criminality is a predictor of future criminality, then it should be included in the state's predictive model of recidivism, which would fix the predictions. The actual perverse incentive here is for the prisons to reverse-engineer the predictive model, figure out where it's consistently wrong, and then lobby to incarcerate (relatively) more of those people. Given that (a) data science is not the core competency of prison operators; (b) prisons will make it obvious when they find vulnerabilities in the model; and (c) the model can be re-trained faster than the prison lobbying cycle, it doesn't seem like this perverse incentive is actually that bad.
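As an illustration of the point that past criminality can simply be absorbed as a feature of the model, here is a minimal sketch. The feature names, coefficients, synthetic data, and the scikit-learn-style logistic regression are all illustrative assumptions, not anything specified in the original discussion:

```python
# Minimal sketch: a recidivism predictor that includes past criminal
# history as a feature, so "past criminality predicts future criminality"
# is absorbed into the baseline rather than exploitable by prisons.
# All feature names and data below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

prior_convictions = rng.poisson(1.0, size=n)      # hypothetical feature
age_at_release = rng.normal(35, 10, size=n)        # hypothetical feature

# Hypothetical ground truth: risk rises with priors, falls with age.
logit = 0.6 * prior_convictions - 0.04 * (age_at_release - 35) - 1.0
reoffended = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([prior_convictions, age_at_release])
model = LogisticRegression().fit(X, reoffended)

# Predicted recidivism probabilities for an incoming cohort; the state
# would compare actual outcomes against this baseline.
cohort = np.array([[0, 25], [3, 25], [0, 50], [3, 50]])
print(model.predict_proba(cohort)[:, 1].round(2))
```

Since retraining amounts to re-running a fit like this on updated outcome data, it is at least plausible that the baseline could be refreshed faster than a prison lobbying cycle, as claimed above.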
4ChristianKl
I don't think that would be a problem. If more people get a good legal defense the system becomes more fair. But even if you don't like that, you can set up the rule that the prison doesn't get the bonus if the prisoner is charged with a crime. You don't need to tie the bonus to conviction.

The incentive to try "high volatility" methods seems like an advantage; if many prisons try them, 20% of them would succeed, and we'd learn how to rehabilitate better.
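A back-of-the-envelope sketch of that reasoning, with all numbers made up for illustration: even if each risky rehabilitation method only works 20% of the time, many prisons trying them independently still yields a steady stream of successes to learn from.

```python
# Rough illustration of the "high volatility is fine in aggregate" point:
# many prisons independently try risky methods with a 20% success rate,
# and the system as a whole learns which ones worked.  Numbers are made up.
import random

random.seed(1)
n_prisons = 100
p_success = 0.20

successes = sum(random.random() < p_success for _ in range(n_prisons))
print(f"{successes} of {n_prisons} prisons found a method that worked")
```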

You are of course entirely correct in saying that this is far too little to retire on. However, it is possible to save without being able to liquidate said savings, for example by paying down debts. The Emergency Fund advice is that you should make a point of having enough liquid savings tucked away to tide you over in a financial emergency before you direct your discretionary income anywhere else.

1[anonymous]
Ah... I see. We keep most of our savings liquid. Safe (i.e. government-guaranteed) investments at the biggest banks here are like 0.5% a year (the Kapitalsparbuch thing here in Austria), so I don't give a damn. And I would rather not gamble on the stock exchange. If I saw inflation I would care, but then I would also see more decent interest rates.

I'm afraid I don't know. You might get better luck making this question a top level post.

ChaosMote140

I am by no means an expert, but here are a couple of options that come to mind. I came up with most of these by thinking "what kinds of emergencies are you reasonably likely to run into at some point, and what can you do to mitigate them?"

... (read more)
1Lumifer
I would like to see some data on whether they are useful, that is, how likely are you to find yourself in a situation where having them in your glove compartment will be important.
0Jiro
How do you determine whether a seat belt cutter/window breaker is a good one? Should you test it on an old rag or something?
2[anonymous]
Split it into "commutes by car" and "commutes by public transport". I know when I used to own a car I was ridiculously prepared, even having a shovel in the trunk. Now with the subway, basically nothing - I have a whole city full of services to help me or anyone else in need. Or five hundred people on the subway train with various skills and items.
4[anonymous]
3-6 months? People don't go on piling up savings indefinitely? How else do you retire? I mean... there is a state pension in the country I live in, but I would not count on it not going bust in 30 years, so I always assumed I will have what I save and then maybe the state pays a bonus.
2Gunnar_Zarncke
Very good ideas. Could be improved upon thus:
  • A seat belt cutter and window breaker for your key ring - always present, in the bus, the train, other people's cars.
  • Practice emergency procedures, so you are actually able to perform them under stress.
  • Always carry a compact emergency kit with band-aids and one or two pads. Possibly a rescue blanket in your backpack.
  • Always have some cash handy (may depend on your country, municipality).

You make a good point, and I am very tempted to agree with you. You are certainly correct in that even a completely non-centralized community with no stated goals can be exclusionary. And I can see "community goals" serving a positive role, guiding collective behavior towards communal improvement, whether that comes in the form of non-exclusiveness or other values.

With that said, I find myself strangely disquieted by the idea of Less Wrong being actively directed, especially by a singular individual. I'm not sure what my intuition is stuck on, bu... (read more)

0casebash
"With that said, I find myself strangely disquieted by the idea of Less Wrong being actively directed, especially by a singular individual." - the proposal wasn't that a single individual would choose the direction, but that there would be a group.

While your family's situation is explained by lack of scope insensitivity, I'd like to put forward an alternative. I think the behavior you described also fits with rationalization. If your family had already made up their minds about supporting the Republican party, they could easily justify it to themselves (and to you) by citing a particular close-to-the-heart issue as an iron-clad reason.

Rationalization also explains why "even people who bother thinking for themselves are likely to arrive at the same conclusion as their peers" - it just means that said people are engaging in motivated cognition to come up with reasonable-sounding arguments to support the same conclusions as their peers.

1[anonymous]
Yeah, but if my mom's parents were on one side of the fence, that would make it less likely for her to hop to the other side, right? She did seem like she thought the democrats were probably right about some things, but that those things were dwarfed by the larger issue. So I'm still mostly convinced this instance was a lack of scope insensitivity. Ah yes, good point. I'm guilty too, haha. A few years ago I engaged in some motivated cognition to convince myself there were solid secular reasons to oppose gay marriage, since everyone I knew and respected was against it even though they claimed to believe in the separation of church and state.

Interesting point! It seems obvious in hindsight that if you reward people for making predictions that correspond to reality, they can benefit both by fitting their predictions to reality and by fitting reality to their predictions. Certainly, it is an issue that comes up even in real life in the context of sports betting. That said, this particular spin on things hadn't occurred to me, so thanks for sharing!

ChaosMote100

I think the issue you are seeing is that Less Wrong is fundamentally an online community / forum, not a movement or even a self-help group. "Having direction" is not a typical feature of such a medium, nor would I say that it would necessarily be a positive feature.

Think about it this way. The majority of the few (N < 10) times I've seen explicit criticism of Less Wrong, one of the main points cited was that Less Wrong had a direction, and that said direction was annoying. This usually referred to Less Wrong focusing on the FAI question and X-ri... (read more)

2Rob Bensinger
I think these concerns are good if we expect the director(s) (/ the process of determining LessWrong's agenda) to not be especially good. If we do expect the director(s) to be good, then they should be able to take your concerns into account -- include plenty of community feedback, deliberately err on the side of making goals inclusive, etc. -- and still produce better results, I think.

If you (as an individual or as a community) don't have coherent goals, then exclusionary behavior will still emerge by accident; and it's harder to learn from emergent mistakes ('each individual in our group did things that would be good in some contexts, or good from their perspective, but the aggregate behavior ended up having bad effects in some vague fashion') than from more 'agenty' mistakes ('we tried to work together to achieve an explicitly specified goal, and the goal didn't end up achieved').

If you do have written-out goals, then you can more easily discuss whether those goals are the right ones -- you can even make one of your goals 'spend a lot of time questioning these goals, and experiment with pursuing alternative goals' -- and you can, if you want, deliberately optimize for inclusiveness (or for some deeper problem closer to people's True Rejections). That creates some accountability when you aren't sufficiently inclusive, makes it easier to operationalize exactly what we mean by 'let's be more inclusive', and makes it clearer to outside observers that at least we want to be doing the right thing.

(This is all just an example of why I think having explicit common goals at all is a good idea; I don't know how much we do want to become more inclusive on various axes.)

I consider philosophy to be a study of human intuitions. Philosophy examines different ways to think about a variety of deep issues (morality, existence, etc.) and tries to resolve results that "feel wrong".

On the other hand, I have very rarely heard it phrased this way. Often, philosophy is said to be reasoning directly about said issues (morality, existence, etc.), albeit with the help of human intuitions. This actually seems to be an underlying assumption of most philosophy discussions I've heard. I actually find that mildly disconcerting, giv... (read more)

2[anonymous]
But obviously morality or existence is not a thing "out there" that we can just examine and see if it is red or blue; they are things in the mind. Mental constructs. Maps, if you like it that way. Ultimately, just words that mean something, but that something is not a simple sensory input. Reifying them is pretty much an automatic fail. So the investigation begins at the assumption that they are made by the mind, hence what we are trying to learn is how the mind makes them. Philosophers may not admit it, but they act as if they did: if a proposal leads to results we find absurd, it is abandoned. If we find nothing exists and we should eat babies, we abandon that line of thought because it did not do its job; it did not predict how we feel about things. What else could it be about if not the mind? If the mind finds something absurd, that is only relevant to truth if it is a truth about the mind. External physical reality is allowed to feel weird to our minds; it is only our minds themselves that are not.
ChaosMote130

To address your first question: this has to do with scope insensitivity, hyperbolic discounting, and other related biases. To put it bluntly, most humans are actually pretty bad at maximizing expected utility. For example, when I first heard about x-risk, my thought process was definitely not "humanity might be wiped out - that's IMPORTANT. I need to devote energy to this." It was more along the lines of "huh, that's interesting. Tragic, even. Oh well; moving on..."

Basically, we don't care much about what happens in the distant future, ... (read more)

It's also possible that people might reasonably disagree with one or more of MIRI's theses.

No, I do not believe that it is standard terminology, though you can find a decent reference here.

Not necessarily. You are assuming that she has an explicit utility function, but that need not be the case.

0Lukas_Gloor
Good point. May I ask, is "explicit utility function" standard terminology, and if yes, is there a good reference to it somewhere that explains it? It took me a long time until I realized the interesting difference between humans, who engage in moral philosophy and often can't tell you what their goals are, and my model of paperclippers. I also think that not understanding this difference is a big reason why people don't understand the orthogonality thesis.
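One way to make the distinction concrete is the hypothetical sketch below: an agent with an explicit utility function has a single queryable object that is its goal, while an agent that merely executes heuristics has nothing of the sort to point to. The class names and structure are purely illustrative, not standard terminology.

```python
# Hypothetical sketch of the distinction: an agent with an explicit
# utility function versus one that merely executes learned heuristics.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ExplicitUtilityAgent:
    utility: Callable[[str], float]      # the goal is a queryable object

    def choose(self, options: List[str]) -> str:
        return max(options, key=self.utility)

    def report_goal(self) -> str:
        return "maximize self.utility (it can be printed, audited, argued with)"

@dataclass
class HeuristicAgent:
    rules: List[Callable[[str], bool]]   # behavior, not a goal

    def choose(self, options: List[str]) -> str:
        for rule in self.rules:           # first rule that fires wins
            for option in options:
                if rule(option):
                    return option
        return options[0]

    def report_goal(self) -> str:
        return "??? -- there is no single function to point to"
```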

Honestly, I suspect that the average person models others after themselves even if they consider themselves to be unusual. So this poll probably shouldn't be used as evidence to shift how similarly we model others to ourselves, one way or another.

That was awesome - thank you for posting the poll! The results are quite intriguing (at N = 18, anyway - might change with more votes, I guess).

0Elo
At N=50 we appear to be making a bell curve. Also, no one thinks they are typical.
0Elo
I predict the results will stay near where they are (at N=20); however, what this means for how we might better model people is unclear. (It might be reasonable to think this subset of the population is in fact a collection of unusual thinkers, but I would say it's safe to assume that this is representative of most of the population in this case.) Do we need to start modelling people as more similar to ourselves (as we all seem to feel like we have unusual thought processes) or less (as we might have unusual processes in different directions from each other)? Would doing either make us more effective at life?

Your best bet would be to find some sort of channel for communicating with your future self that your adversary does not have access to. Other posters mentioned several such examples, with channels including:

  • Keeping your long-term memories (assuming that the memories couldn't be tampered with by the adversary)
  • Swallowing a message, getting it as a tattoo, etc. (assuming that the adversary can't force you to do that)
  • Using some sort of biometric lock (assuming that the adversary can't get a proper sample without causing detectable alterations to your blood)
... (read more)

I think you are being a little too exacting here. True, most advances in well-studied fields are likely to be made by experts. That doesn't mean that non-experts should be barred from discussing the issue, for educational and entertainment purposes if nothing else.

That is not to say that there isn't a minimum level of subject-matter literacy required for an acceptable post, especially when the poster in question posts frequently. I imagine your point may be that Algon has not cleared that threshold (or is close to the line) - but your post seems to imply a MUCH higher threshold for posting.

0Shmi
Indeed. And a more appropriate tone for this would be "How is this addressed in current AI research?" and "Where can I find more information about it?", not "I cannot find anything wrong with this idea." To be fair, the OP was edited to sound less arrogant, though the author's reluctance to do some reading even after being pointed to it is not encouraging. Hopefully this is changing.

I'm not convinced that the solution you propose is in fact easier than solving FAI. The following problems occur to me:

1) How do we explain to the creator AI what an FAI is?
2) How do we allow the creator AI to learn "properly" without letting it self-modify in ways that we would find objectionable?
3) In the case of an unfriendly creator AI, how do we stop it from "sabotaging" its work in a way that would make the resulting "FAI" be friendly to the creator AI and not to us?

In general, I feel like the approach you outline just... (read more)

-1Algon
You may have a point there. But I think that the problems you've outlined are ones that we could circumvent. With 1), we don't know exactly how to describe what an FAI should do, or be like, so we might present an AI with the challenge of 'what would an FAI be like for humanity?' and then use that as a goal for FAI research. 2) I should think that it's technically possible to construct it in such a way that it can't just become a super-intellect, whilst still allowing it to grow in pursuit of its goal. I would have to think for a while to present a decent starting point for this task, but I think it is more reasonable than solving the FAI problem. 3) I should think that this is partly circumvented by 1): if we know what it should look like, we can examine it to see if it's going in the right direction. Since it will be constructed by a human-level intellect, we should notice any errors. And if anything does slip through, then the next AI would be able to pick it up. I mean, that's part of the point of the couple-of-years (or less) time limit; we can stop an AI before it becomes too powerful, or malevolent, and the next AI would not be predisposed in that way, so we can make sure it doesn't happen again. Thanks for replying, though. You made some good points. I hope I have adjusted the plan so that it is more to your liking (not sarcastic, I just didn't know the best way to phrase that). EDIT: By the way, this is strictly a backup. I am not saying that we shouldn't pursue FAI. I'm just saying that this might be a reasonable avenue to pursue if it becomes clear that FAI is just too damn tough.

You are absolutely correct. If the number of states of the universe is finite, then as long as any state is reachable from any other state, every state will be reached arbitrarily often if you wait long enough.

ChaosMote100

Mathematician here. I wanted to agree with @pianoforte611 - just because you have infinite time doesn't mean that every event will repeat over and over.

For those interested in some reading, the general question is basically the question of Transience in Markov Chains; I also have some examples. :)

Let us say that we have a particle moving along a line. In each unit of time, it moves a unit of distance either left or right, with probability 1/10 of the former and 9/10 of the latter. How often can we expect the particle to have returned to its starting point... (read more)
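A quick simulation of the biased walk just described, written purely as an illustration (the quoted return-probability formula is the standard result for asymmetric simple random walks):

```python
# Simulate the walk above: step right with probability 9/10, left with 1/10.
# The walk is transient, so the number of returns to the origin stays small
# no matter how long we run it.  (The standard result is that the return
# probability is 1 - |p_right - p_left| = 0.2 here, giving ~0.25 expected
# returns in total.)
import random

random.seed(0)

def count_returns(p_right: float, n_steps: int) -> int:
    position, returns = 0, 0
    for _ in range(n_steps):
        position += 1 if random.random() < p_right else -1
        if position == 0:
            returns += 1
    return returns

for steps in (10_000, 1_000_000):
    print(f"{steps} steps: {count_returns(0.9, steps)} returns to the origin")
```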

4Kindly
What if we assume a finite universe instead? Contrary to what the post we're discussing might suggest, this actually makes recurrence more reasonable. To show that every state of a finite universe recurs infinitely often, we only need to know one thing: that every state of the universe can be eventually reached from every other state. Is this plausible? I'm not sure. The first objection that comes to mind is entropy: if entropy always increases, then we can never get back to where we started. But I seem to recall a claim that entropy is a statistical law: it's not that it cannot decrease, but that it is extremely unlikely to do so. Extremely low probabilities do not frighten us here: if the universe is finite, then all such probabilities can be lower-bounded by some extremely tiny constant, which will eventually be defeated by infinite time. But if the universe is infinite, this does not work: not even if the universe is merely potentially infinite, by which I mean that it can grow to an arbitrarily large finite size. This is already enough for the Markov chain in question to have infinitely many states, and my intuition tells me that in such a case it is almost certainly transient.
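By contrast, here is a small simulation of the finite, irreducible case described above, in which every state keeps being revisited. The 3-state transition probabilities are an arbitrary example chosen for illustration:

```python
# Companion sketch for the finite case: in a finite chain where every state
# can eventually reach every other state (irreducibility), all visit counts
# keep growing as the chain runs longer.
import random

random.seed(0)
P = {
    0: [(0, 0.90), (1, 0.09), (2, 0.01)],  # even rare transitions suffice
    1: [(0, 0.50), (1, 0.49), (2, 0.01)],
    2: [(0, 0.98), (1, 0.01), (2, 0.01)],
}

def step(state: int) -> int:
    r, acc = random.random(), 0.0
    for nxt, p in P[state]:
        acc += p
        if r < acc:
            return nxt
    return P[state][-1][0]  # guard against floating-point rounding

visits = {s: 0 for s in P}
state = 0
for _ in range(1_000_000):
    state = step(state)
    visits[state] += 1
print(visits)  # every state has been visited many times
```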

Sorry - hadn't logged in for a while. I thought it would have vanishingly low probability of working, though I don't believe that it displaces any other action likely to work (though it does displace saving a person if all else fails, which has nontrivial value). Having said that, curiously enough it seems that this particular suggestion WAS implemented in the official solution, so I guess that was that. :)

I don't believe leveraging Voldemort's bargain will work the way you suggest, because Parseltongue does not enforce promises, only honesty. When Harry demands that he himself be saved, Voldemort can simply say "No."

1Benquo
Right - but how low do you think the probability is, and what's the best action it displaces?

You make a good point - in this instance, Voldemort is very much difficult to bargain with. However, I don't agree that that makes the problem impossible. For one thing, there may be solutions which don't require Voldemort's cooperation - e.g. silent transfiguration while stalling for time. For another, Harry can still get Voldemort's cooperation by convincing Voldemort that his current action is not in Voldemort's interests - for example, that killing Harry will actually bring about the end of the world that Voldemort fears.

I think in this case, you and Eliezer are both correct, but for different definitions of "winning". If one's primary goal is to find a solution to the puzzle (and get the good ending), then your advice is probably correct. However, if the goal is to experience having to solve a hard problem using one's intellect, then Eliezer's advice seems more valid. I imagine that this is in the same way that one might not want to look up a walkthrough for a game - it would help you "win" the game, but not win at getting the most benefit/enjoyment out of it.

I thought of the idea that maybe the human decision maker has multiple utility functions, and when you try to combine them into one function, some parts of the original functions don't necessarily translate well... sounds like the "shards of desire" are actually a bunch of different utility functions.

This is an interesting perspective, and I would agree that we humans typically have multiple decision criteria which often can't be combined well. However, I don't think it is quite right to call them utility functions. Humans are adaptation-execut... (read more)

Thank you! That is exactly what I was looking for.

I'm having trouble finding the original sequence post that mentions it, but a "fully general excuse" refers to an excuse that can be applied to anything, independently of the truth value of the thing. In this case, what I mean is that "this isn't really the important stuff" can sound reasonable even when applied to the stuff that actually is important (especially if you don't think about it too long). It follows that if you accept that as a valid excuse but don't keep an eye on your behavior, you may find yourself labeling whatever you don't want to do at the moment as "not really important" - which leads to important work not getting done.

9VincentYu
The post is "Knowing About Biases Can Hurt People". See also the wiki page on fully general counterarguments.

For a while now, I've been spending a lot of my free time playing video games and reading online fiction, and for a while now, I've considered it a bad habit that I should try to get rid of. Up till now, I've been almost universally unsuccessful at maintaining this resolve for any length of time.

My latest attempt consisted of making the commitment public to my closest friends, explaining the decision to them, and then asking them to help by regularly checking up on my progress. This has been more effective than anything else I've tried so far.

Usually, whe... (read more)

I have two pieces of advice for you. Please take them with a grain of salt - this is merely my opinion and I am by no means an expert in the matter. Note that I can't really recommend that you do things one way or another, but I thought I would bring up some points that could be salient.

1) When thinking about the coding job, don't put a lot of emphasis on the monetary component unless you seriously need the money. You are probably earning less than you would be in a full time job, and your time is really valuable at the moment. On the other hand, if you n... (read more)

0[anonymous]
My advice would be to get just a minor in Math so you can get some easy electives in. If you're considering a career change out of computer science as a possibility and just creating options, then there are very few grad schools that would accept a math major but not a math minor. In the job market, a CS degree plus a math minor is probably not going to be a big difference from a double major. This may not be the case for all possibilities, but it should hold for most of them. Certainly most employers are going to compliment you and then shrug for programming jobs.
1ausgezeichnet
Thanks for the response. Re: 1) I'm not as focused on the money as on the programming opportunities it might later lead to. Re: 2) I agree with everything here. What do you mean in your last sentence?

Pretty much any such moral standard says that you must be better than him

Why does this need to be the case? I would posit that the only paradox here is that our intuitions find it hard to accept the idea of a serial killer being a good person, much less a better person than one need strive to be. This shouldn't be that surprising - really, it is just the claim that utilitarianism may not align well with our intuitions.

Now, you can totally make the argument that not aligning with our intuitions is a flaw of utilitarianism, and you would have a point. If ... (read more)

As far as I understand it, the text quoted here is implicitly relying on the social imperative "be as moral as possible". This is where the "obligatory" comes from. The problem here is that the imperative "be as moral as possible" gets increasingly more difficult as more actions acquire moral weight. If one has internalized this imperative (which is realistic given the weight of societal pressure behind it), utilitarianism puts an unbearable moral weight on one's metaphorical shoulders.

Of course, in reality, utilitarianism imp... (read more)

0Jiro
That is prone to the charity-giving serial killer problem. If someone kills people, gives 90% to charity, and just 20% is enough to produce utility that makes up for his kills, then pretty much any such moral standard says that you must be better than him, yet he's producing a huge amount of utility and to be better than him from a utilitarian standpoint, you must give at least 70%. If you avoid utilitarianism you can describe being "better than" the serial killer in terms other than producing more utility; for instance, distinguishing between deaths resulting from action and from inaction.

It might be useful to distinguish between a "moral theory", which can be used to compare the morality of different actions, and a "moral standard", which is a boolean rule used to determine what is morally 'permissible' and what is morally 'impermissible'.

I think part of the point your post makes is that people really want a moral standard, not a moral theory. I think that makes sense; with a moral standard, you have a course of action guaranteed to be "good", whereas a moral theory makes no such guarantee.

Furthermore, I suspect t... (read more)

Such a moral theory can be used as one of the criteria in a multi-criterion decision system. This is useful because in general people prefer being more moral to being less moral, but not to the exclusion of everything else. For example, one might genuinely want to improve the world and yet be unwilling to make life-altering changes (like donating all but the bare minimum to charity) to further this goal.
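A minimal sketch of what "one criterion among several" might look like in practice; the options, scores, and weights below are entirely made up for illustration:

```python
# Hypothetical sketch: a moral theory supplies a comparable "moral score"
# that is weighed against other criteria (here, personal cost), rather than
# acting as a binary permissible/impermissible gate.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    moral_score: float    # from the moral theory (higher = more moral)
    personal_cost: float  # e.g. money, time, life disruption

WEIGHTS = {"moral_score": 1.0, "personal_cost": -0.5}  # made-up trade-off

def overall(option: Option) -> float:
    return (WEIGHTS["moral_score"] * option.moral_score
            + WEIGHTS["personal_cost"] * option.personal_cost)

options = [
    Option("donate 10% of income", moral_score=7.0, personal_cost=3.0),
    Option("donate everything above subsistence", moral_score=10.0, personal_cost=20.0),
    Option("donate nothing", moral_score=0.0, personal_cost=0.0),
]
print(max(options, key=overall).name)  # picks the 10% option under these weights
```

Under these (arbitrary) weights the most moral option is not chosen, but morality still pulls the decision toward the middle option rather than toward doing nothing, which is the trade-off described above.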

Hello, all!

I'm a new user here at LessWrong, though I've been lurking for some time now. I originally found LessWrong by way of HPMOR, though I only starting following the site when one of my friends strongly recommended it to me at a later date. I am currently 22 years old, fresh out of school with a BA/MA in Mathematics, and working a full-time job doing mostly computer science.

I am drawn to LessWrong because of my interests in logical thinking, self improvement, and theoretical discussions. I am slowly working my way through the sequences right now - s... (read more)

Get in the habit of smiling. Smile when you are greeting someone, smile at the cashier when they ring up your groceries, smile to yourself when you are alone.

As far as I understand it, the physical act of smiling (for whatever reason) improves your mood. Personally, I've tried to make it a point to smile whenever it occurs to me, and I've found that it generally improves my day. In particular, I find myself feeling more positive and optimistic.

0obvious
"Smile with your eyes" as an alternative.