All of MrCogmor's Comments + Replies

Nutrition is taught in colleges so that people can become accredited dietitians. You should be able to find a decent undergrad textbook on Amazon. If you buy a used copy an edition behind the current one, it should be cheap as well.

https://www.amazon.com/Nutrition-for-Foodservice-and-Culinary-Professionals/dp/0470052422/ref=cm_cr_dp_d_rvw_txt?ie=UTF8

Rationality means achieving your goals and values efficiently and effectively.

The model of rationality presented on LessWrong usually treats goals and values that are of negative utility to the agent as biases or errors rather than as goals evolved to benefit the group or the genes. That leads to a view of rationality as strictly optimizing selfish goals.

This is a false dichotomy. Just because a value is not of negative utility doesn't mean it is optimized to benefit the genes. Scott Alexander, for example, is asexual, and there are plenty of gay people.... (read more)

Rational Utilitarianism is the greatest good for the greatest number given the constraints of imperfect information and faulty brains.

Rationality is the art of making better decisions in service of a goal, taking into account imperfect information and the constraints of our mental hardware. When applied to utilitarianism you get posts like this: Nobody is perfect, everything is commensurable

Rationality plus Utilitarianism plus evolutionary psychology leads to the idea that a rational person is one who satisfies their own goals.

I don't see how this f... (read more)

2PhilGoetz
Start by saying "rationality" means satisficing your goals and values. The issue is what values you have. You certainly have selfish values. A human also has values that lead to optimizing group survival. Behavior oriented primarily towards those goals is called altruistic. The model of rationality presented on LessWrong usually treats goals and values that are of negative utility to the agent as biases or errors rather than as goals evolved to benefit the group or the genes. That leads to a view of rationality as strictly optimizing selfish goals. As to old Utilitarianism 1.0, where somebody just declares by fiat that we are all interested in the greatest good for the greatest number of people--that isn't on the table anymore. People don't do that. Anyone who brings that up is the one asserting an "ought" with no justification. There is no need to talk about "oughts" yet.

It sounds less like he rewrote his natural morality and more like he engaged in a lot of motivated reasoning to justify his selfish behaviour. Rational Utilitarianism is the greatest good for the greatest number given the constraints of imperfect information and faulty brains. The idea that other people don't have worth because they aren't as prosocial as you is not Rational Utilitarianism (especially when you aren't actually prosocial because you don't value other people).

If whoever it is can't feel much sympathy for people in distant countries then that ... (read more)

3PhilGoetz
No; I object to your claiming the term "rational" for that usage. That's just plain-old Utilitarianism 1.0 anyway; it doesn't take a modifier. Rationality plus Utilitarianism plus evolutionary psychology leads to the idea that a rational person is one who satisfies their own goals. You can't call trying to achieve the greatest good for the greatest number of people "rational" for an evolved organism.

I've had the experience where I read for a long time and then go talk to people and my voice doesn't work correctly on the first try and is barely audible. I assume it is because my brain got too good at suppressing subvocalization while reading.

That was my point. Philosophy uses subjective words in order to confuse meanings. Once you translate a concept into one of its objective interpretations it becomes simple. A good example is the concept of free will.

0TheAncientGeek
No. Consciousness is subjective as a thing. If you disregard a thing's essential characteristic, it is you who are confusing yourself.
1[anonymous]
What is an 'objective interpretation' of a concept?

Present the complicated problem and then break it down into understandable parts. Much of philosophy is basic but not widely understood because it is obfuscated by multiple meanings and ends up arguing about definitions such as "What is consciousness?". It is helpful to disambiguate these questions by choosing an objective interpretation and then answering that. For example, "What is consciousness?" can be defined as "What makes a creature aware of its environment?", "What process produces thoughts?" or "What process produces sensation?"

-1TheAncientGeek
Consciousness is subjective, so that approach misses the mark.

In the second paragraph of the quote the author ignores the whole point of replication efforts. We know that scientific studies may suffer from methodological errors; the whole point of replication studies is to identify such errors. If the studies disagree, then you know there is an uncontrolled variable or methodological mistake in one or both of them, and further studies and the credibility of the experimenters are then used to determine which result is more likely to be true. If the independent studies agree then it is evidence that they are both correct... (read more)

5DanArmak
Original, non-replication studies are mostly made by people who agree with what their studies are showing. (Also, publication bias.) So this is not a reason to think replication studies are particularly biased.

The Mike story can be considered an example of the halo effect if you assume that Mike can interpret the obtuse language better than Jessica can because of his morality. On the other hand if Jessica interpreted it herself she would probably have gotten the same wrong impression of the law as Mike.

Or it could be that Mike interpreted the law correctly but has a few quirks in his morality that you don't share. In that case it is not the halo effect but rather a generalization or heuristic failing in a specific instance.

TV Tropes' Broken Pedestal page has some good examples of the halo effect.

The point of it wasn't to say that people like meat. The point was that people experience or anticipate enough akrasia about giving up meat that they search Google and ask people on question sites for help.

I used to believe, like you, that if you believe something is morally good then you will do it. That axiom used to be a cornerstone of my model of morality. There was actually a stage in my life where my moral superiority provided most of my self-esteem and disobeying it was unthinkable. When I encountered belief in belief I couldn't make sense of it at all. I w... (read more)

I searched for "I want to be vegan but love meat". It was in Google autocomplete and has plenty of results, including this Yahoo Answers page which explicitly mentions that the poster wants to be a vegetarian for ethical reasons.

0blacktrance
I don't think that's a counterexample. If I had a billionaire uncle who willed me his fortune, I could say something like "I like money but I don't want to commit murder" - and then I wouldn't commit murder. Liking the taste of meat and still abstaining from it because you think eating it is evil is similar.

I think you are confusing logical and behavioral consistency here. The OP meant inconsistent in the logical sense, while you are thinking of behavioral consistency. Another context for consistency is matter, where consistency refers to the viscosity of the material. In each case it refers to how resistant to change something is.

Error isn't implying that the final state is different. Just that the destructive copy process is a form of death and the wired brain process isn't.

I get where he is coming from: a copy is distinct from the original and can have different experiences. In the destructive-copy scenario a person is killed and a person is born. In the wired-brain scenario the person is not copied; they merely change over time and nobody dies.

My view is that if I die to make an upload (which is identical to me except for greater intelligence and other benefits) then I think the gain outweighs the loss.

Humans would be considered UFAI if they were digitised. Merely consider a button that picks a random human and gives them absolute control. I wouldn't press that button because there is a significant chance that such a person would have goals that significantly differ from my own.

I thought of this moral dilemma.

There are two options.

  1. You experience a significant amount of pain; 5 minutes later you completely forget about the experience, as if you were never in pain at all.
  2. You experience slightly less pain than in option 1, but you don't forget it.

Which one would you choose?

MrCogmor120

You are right in the sense that playing at the casino doesn't give your friend an extra four dollars, but since utility is relative it depends on your perspective. Allow me to explain.

This demonstrates your view, where C is the money lost or gained by playing at the casino:

Pay parking outcome = -$4, Casino outcome = C

And this demonstrates your friend's view:

Pay parking outcome = $0, Casino outcome = C + $4 (it's the same as your view, but with +$4 added to both sides)

If you apply a modifier (in this case +4) to all choices then the difference between them stays ... (read more)
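The arithmetic above can be sketched as a few lines of Python. This is only an illustration of the point, not anything from the original thread: the value chosen for C is a hypothetical stand-in, since C is left unspecified in the comment. Adding the same constant (+4) to every outcome shifts the numbers but never changes the difference between the choices, so both framings recommend the same action.

```python
# Sketch: a constant offset applied to every outcome's utility leaves
# the *difference* between options, and hence the best choice, unchanged.

def best_choice(outcomes):
    """Return the option with the highest utility."""
    return max(outcomes, key=outcomes.get)

C = 2  # hypothetical net result of playing at the casino (not from the source)

my_view     = {"parking": -4, "casino": C}
friend_view = {"parking": 0,  "casino": C + 4}  # same outcomes, +4 on both sides

# The gap between the two options is identical under both framings...
assert (my_view["casino"] - my_view["parking"]
        == friend_view["casino"] - friend_view["parking"])

# ...so both perspectives pick the same option.
assert best_choice(my_view) == best_choice(friend_view)
```

The same holds for any offset, which is why the friend's "+4" reframing is harmless: utility comparisons are invariant under adding a constant to all outcomes.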