
Comment author: adamzerner 04 April 2014 03:33:57AM *  2 points [-]

I just finished reading Eliezer's April Fools Day post, where he illustrated how good a society could be. A future society filled with rational people, structured the way Eliezer describes and continuing to progress linearly in technology, would be pretty amazing. What of value would the intelligence explosion provide that this society wouldn't?

Put differently, diff(intelligenceExplosion, dath ilan).

Comment author: shokwave 04 April 2014 09:59:30AM 1 point [-]

Well, in dath ilan, people do still die, even though they're routinely cryonically frozen. I suspect with an intelligence explosion death becomes very rare (or horrifically common, like, extinction).

Comment author: JQuinton 31 March 2014 04:20:00PM 0 points [-]

That's actually a good question. Without disclosing too much of her psych history, she seems to be really impulsive and might even be prone to addiction. I suppose she could get an exercise disorder... this makes it even more complicated than I thought.

Comment author: shokwave 31 March 2014 05:06:49PM 1 point [-]

I'd caution that suspecting (out loud) that she might develop an exercise disorder would be one of those insulting or belittling things you were worried about (either because it seems like a cheap shot based on the anorexia diagnosis, or because this might be one approach to actually getting out from under the anorexia by exerting control over her body).

Likely a better approach to this concern would be to silently watch for those behaviours developing and worry about it if and when it actually does happen. (Note that refusing to help her with training and diet means she gets this help from someone who is not watching out for the possibility of exercise addiction).

There are a few approaches that might work for different people:

  • Talk as though she doesn't have anorexia. Since you are aware, you can tailor your message to avoid saying anything seriously upsetting (e.g. you can present the diet assuming control of diet is easy, or assuming it is hard). I don't recommend this approach.
  • Confront the issue directly ("Exercise is what tells your body to grow muscle, but food is what muscles are actually built out of, so without a caloric surplus your progress will be slow. I'm aware that this is probably a much harder challenge for you than most people..."). I don't recommend this approach.
  • Ask her how she feels about discussing diet. ("Do you feel comfortable discussing diet with me? Feel free to say no. Also, don't feel constrained by your answer to this question; if later you start wishing you'd said no, just say that, and I'll stop."). I recommend this approach.

In any case, make it clear from the outset you want to be respectful about it.

Comment author: shokwave 21 March 2014 05:38:04AM *  -1 points [-]

It seems like the War on Terror, etc., are not actually about prevention, but about "cures".

Some drug addiction epidemic or terrorist attack happens. Instead of it being treated as an isolated disaster like a flood, which we should (but don't) invest in preventing in the future, it gets described as an ongoing War which we need to win. This puts it firmly in the "ongoing disaster we need to cure" camp, and so cost is no object.

I wonder if the reason there appears to be a contradiction is just that some policy-makers take prevention-type measures and create a framing of "ongoing disaster" around them, to make them look like cures (and also to get them done).

Comment author: shokwave 18 March 2014 04:08:23PM *  1 point [-]

One would be ethical if their actions end up with positive outcomes, disregarding the intentions of those actions. For instance, a terrorist who accidentally foils an otherwise catastrophic terrorist plan would have done a very ‘morally good’ action.

This seems intuitively strange to many; it definitely does to me. Instead, ‘expected value’ seems to be a better way of both making decisions and judging the decisions made by others.

If the actual outcome of your action was positive, it was a good action. Buying the winning lottery ticket, as per your example, was a good action. Buying a losing lottery ticket was a bad action. Since we care about just the consequences of the action, the goodness of an action can only be evaluated after the consequences have been observed - at some point after the action was taken (I think this is enforced by the direction of causality, but maybe not).

So we don't know if an action is good or not until it's in the past. But we can only choose future actions! What's a consequentialist to do? (Equivalently, since we don't know whether a lottery ticket is a winner or a loser until the draw, how can we choose to buy the winning ticket and choose not to buy the losing ticket?) Well, we make the best choice under uncertainty that we can, which is to use expected values. The probability-literate person is making the best choice under uncertainty they can; the lottery player is not.
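
To make that "best choice under uncertainty" arithmetic concrete, here is a minimal sketch; the ticket price, jackpot, and odds are made-up placeholders, not figures from the example:

```python
# Expected value of buying vs skipping a lottery ticket.
# All numbers are hypothetical placeholders.
TICKET_PRICE = 2.0
JACKPOT = 1_000_000.0
P_WIN = 1 / 10_000_000

ev_buy = P_WIN * (JACKPOT - TICKET_PRICE) + (1 - P_WIN) * (-TICKET_PRICE)
ev_skip = 0.0

print(f"EV of buying:   {ev_buy:+.2f}")   # roughly -1.90
print(f"EV of skipping: {ev_skip:+.2f}")  # +0.00

# The expected-value procedure says "skip", even though, after the draw, a
# rare bought-and-won ticket would have been the better action judged
# purely by its consequences.
```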

The next step is to say that we want as many good things to happen as possible, so "expected value calculations" is a correct way of making decisions (one that can sometimes produce bad actions, but less often than other procedures) and "wishful thinking" is an incorrect way of making decisions.

So the probability-literate used a correct decision procedure to come to a bad action, and the lottery player used an incorrect decision procedure to come to a good action.

The last step is to say that judging past actions changes nothing about the consequences of that action, but judging decision procedures does change something about future consequences (via changing which actions get taken). Here is the value in judging a person's decision procedures. The terrorist used a very morally wrong decision procedure to come up with a very morally good action: the act is good and the decision procedure is bad, and if we judge the terrorist by their decision procedure we influence future actions.
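
One way to see the payoff of judging procedures rather than actions is a toy simulation; the payoffs below just mirror the hypothetical lottery numbers above and are not meant to be realistic:

```python
import random

random.seed(0)

# Two decision procedures facing the same hypothetical lottery many times.
def ev_procedure():
    # Computes the (negative) expected value and declines the ticket.
    return 0.0

def wishful_procedure():
    # Imagines the jackpot and buys the ticket every time.
    return 999_998.0 if random.random() < 1e-7 else -2.0

TRIALS = 1_000_000
print("EV-based procedure total:", sum(ev_procedure() for _ in range(TRIALS)))
print("Wishful procedure total: ", sum(wishful_procedure() for _ in range(TRIALS)))

# Judging (and correcting) the procedure changes all of these future totals;
# judging any single past ticket purchase changes nothing about its outcome.
```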

--

I think it's very important for consequentialists to always remember that an action's moral worth is evaluated on its consequences, and not on the decision theory that produced it. This means that despite your best efforts, you will, at some point, make the best decision possible and still commit bad acts.

If you let it collapse - if you take the shortcut and say "making the best decision you could is all you can do" - then every decision you make counts as good (barring inattentiveness or laziness), and you lose the chance to find out that expected value calculations or Bayes' theorem needs to go out the window.

Comment author: CellBioGuy 14 March 2014 01:45:13AM *  6 points [-]

Unsure what you mean by the 'just'. Should it be more, and what is different about how we value morality based on its origin?

Comment author: shokwave 14 March 2014 05:02:52AM -1 points [-]

There's no other source of morality and there's no other criterion to evaluate a behaviour's moral worth by. (Theorised sources such as "God" or "innate human goodness" or "empathy" are incorrect; criteria like "the golden rule" or "the Kantian imperative" or "utility maximisation" are only correct to the extent that they mirror the game theory evaluation.)

Of course we claim to have other sources and we act according to those sources; the claim is that those moral-according-to-X behaviours are immoral.

what is different about how we value morality based on its origin?

Evolution, either genetic or cultural, doesn't have infinite search capacity. We can evaluate which of our adaptations actually are promoting or enforcing symmetric cooperation in the IPD, and which are still climbing that hill, or are harmless extraneous adaptations generated by the search but not yet optimised away by selection pressures.

Comment author: blacktrance 13 March 2014 06:38:51PM *  0 points [-]

By "concept of morality", do you mean moral intuitions or the output of ethical theories?

Comment author: shokwave 14 March 2014 04:55:40AM 0 points [-]

Sorry, I was trying to get at 'moral intuitions' by saying fairness, justice, etc. In this view, ethical theories are basically attempts to fit a line to the collection of moral intuitions - to try and come up with a parsimonious theory that would have produced these behaviours - and then the outputs are right or interesting only as far as they approximate game-theoretic-good actions or maxims.

Comment author: CellBioGuy 12 March 2014 02:03:05PM *  5 points [-]

Irrationality game:

There are other 'technological civilizations' (in the sense of intelligent living things that have learned to manipulate matter in a complicated way) in the observable universe: 99%

There are other 'technological civilizations' in our own galaxy: 75% with most of the probability mass in regimes where there are somewhere between dozens and thousands.

Conditional on these existing: Despite some being very old, they are limited by the hostile nature of the universe and the realities of practical manipulation of matter and energy to never controlling much matter outside the surfaces of life-bearing worlds, and either never leave their solar systems of origin with anything self-replicating or their replicators on average produce less than 1 seed to continue. 95%

Humanity has already received and recorded a radio signal from another thing-analogous-to-a-technological-civilization. This was either unnoticed or not unequivocally recognized as such due to some combination of very short duration, being a one-off event that was never repeated, being modulated in a way that the receiver was not looking for, or being indistinguishable from terrestrial radio noise. 20%.

Conditional on the above, the "Wow!" signal was such a signal. 20%.

Comment author: shokwave 13 March 2014 04:25:46PM 0 points [-]

Even given other technological civilisations existing, leaving only 5% for anything beyond "matter and energy manipulation tops out a little above our current cutting edge" is way off.

Comment author: shokwave 13 March 2014 04:20:41PM *  -3 points [-]

Irrationality game: Humanity's concept of morality (fairness, justice, etc) is just a collection of adaptations or adaptive behaviours that have grown out of game theory; specifically, out of trying to get to symmetrical cooperation in the iterated Prisoner's Dilemma. 85% confident.
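
For anyone who wants the game-theory picture spelled out, here is a minimal iterated Prisoner's Dilemma sketch with standard textbook payoffs; the strategies and numbers are illustrative, not part of the claim itself:

```python
# Iterated Prisoner's Dilemma: a reciprocating strategy sustains symmetric
# cooperation against itself, while meeting an unconditional defector
# collapses to the low mutual-defection payoff.
PAYOFFS = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    return "C" if not history else history[-1][1]  # copy opponent's last move

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): symmetric cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): cooperation never gets going
```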

Comment author: alicey 01 March 2014 04:28:32PM *  4 points [-]

i tend to express ideas tersely, which seems to count as poorly-explained if my audience is expecting more verbiage, so they round me off to the nearest cliche and mostly downvote me

i have mostly stopped posting or commenting on lesswrong and stackexchange because of this

like, when i want to say something, i think "i can predict that people will misunderstand and downvote me, but i don't know what improvements i could make to this post to prevent this. sigh."

revisiting this on 2014-03-14, i consider that perhaps i am likely to discard parts of the frame message and possibly outer message - because, to me of course it's a message, and to me of course the meaning of (say) "belief" is roughly what http://wiki.lesswrong.com/wiki/Belief says it is

for example, i suspect that the use of more intuitively sensible grammar in this comment (mostly just a lack of capitalization) often discards the frame-message-bit of "i might be intelligent" (or ... something) that such people understand from messages (despite this being an incorrect thing to understand)

Comment author: shokwave 03 March 2014 05:16:35AM 4 points [-]

so they round me off to the nearest cliche

I have found great value in re-reading my posts looking for possible similar-sounding cliches, and re-writing to make the post deliberately inconsistent with those.

For example, the previous sentence could be rounded off to the cliche "Avoid cliches in your writing". I tried to avoid that possible interpretation by including "deliberately inconsistent".

Comment author: Pablo_Stafforini 28 February 2014 05:17:41PM *  0 points [-]

I think that for the purposes of assessing the claim in question ("Eggs and whole milk are very nutrient dense"), unfortified versions of those foods should be considered. Otherwise, we should also regard cereals and many other foods as "very nutrient dense", simply because manufacturers decide to fortify them in all sorts of ways. (And I note that it's generally not a good idea to obtain your nutrients from supplements when you can obtain them from real food instead.)

In any case, even if we used data for fortified milk, it would still be false, in my opinion, that "whole milk is very nutrient dense." Vitamin D levels make a minor contribution to overall nutritional density.

Comment author: shokwave 02 March 2014 12:26:31AM *  0 points [-]

I suspect the real issue is using the "nutrients per calorie" meaning of nutrient dense, rather than interpreting it as "nutrients per some measure of food amount that makes intuitive sense to humans, like what serving size is supposed to be but isn't".

Ideally we would have some way of, for each person, saying "drink some milk" and seeing how much they drank, and "eat some spinach" and seeing how much they ate, then compare the total amount of nutrients in each amount on a person by person basis.

I know this is not the correct meaning of nutrient dense, but I think it's more useful.
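
A toy way to see how much the denominator matters; every number below is a made-up placeholder, not real nutrition data:

```python
# Compare two readings of "nutrient dense": per calorie vs per amount a
# person would typically consume. All values are hypothetical placeholders.
foods = {
    # name: (nutrient_score_per_100g, kcal_per_100g, grams_typically_consumed)
    "whole milk": (5.0, 60.0, 250.0),
    "spinach": (20.0, 23.0, 30.0),
}

for name, (nutrients_per_100g, kcal_per_100g, typical_grams) in foods.items():
    per_calorie = nutrients_per_100g / kcal_per_100g
    per_typical_amount = nutrients_per_100g * typical_grams / 100.0
    print(f"{name:10s}  per calorie: {per_calorie:.2f}   "
          f"per typical amount eaten: {per_typical_amount:.1f}")

# With these placeholder numbers spinach wins per calorie, but the glass of
# milk delivers more nutrients per sitting - which is the "intuitive amount"
# reading of nutrient density suggested above.
```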
