Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: adamzerner 04 April 2014 03:33:57AM *  2 points [-]

I just finished reading Eliezer's April Fools Day post, where he illustrated how good a society could be. A future society filled with rational people, structured the way Eliezer describes, and continuing with linear progression in technology would be pretty amazing. What is it that the intelligence explosion would provide of value that this society wouldn't?

Put differently, diff(intelligenceExplosion, dath ilan).

Comment author: shokwave 04 April 2014 09:59:30AM 1 point [-]

Well, in dath ilan, people do still die, even though they're routinely cryonically frozen. I suspect with an intelligence explosion death becomes very rare (or horrifically common, like, extinction).

Comment author: JQuinton 31 March 2014 04:20:00PM 0 points [-]

That's actually a good question. Without disclosing too much of her psych history, she seems to be really impulsive and might even be prone to addiction. I suppose she could get an exercise disorder... this makes it even more complicated than I thought.

Comment author: shokwave 31 March 2014 05:06:49PM 1 point [-]

I'd caution that suspecting (out loud) that she might develop an exercise disorder would be one of those insulting or belittling things you were worried about (either because it seems like a cheap shot based on the anorexia diagnosis, or because this might be one approach to actually getting out from under the anorexia by exerting control over her body).

Likely a better approach to this concern would be to silently watch for those behaviours developing, and worry about them if and when they actually do. (Note that refusing to help her with training and diet means she gets that help from someone who is not watching out for the possibility of exercise addiction.)

There are a few approaches that might work for different people:

  • Talk as though she doesn't have anorexia. Since you are aware, you can tailor your message to avoid saying anything seriously upsetting (e.g. you can present the diet assuming control of diet is easy, or assuming it is hard). I don't recommend this approach.
  • Confront the issue directly ("Exercise is what tells your body to grow muscle, but food is what muscles are actually built out of, so without a caloric surplus your progress will be slow. I'm aware that this is probably a much harder challenge for you than most people..."). I don't recommend this approach.
  • Ask her how she feels about discussing diet. ("Do you feel comfortable discussing diet with me? Feel free to say no. Also, don't feel constrained by your answer to this question; if later you start wishing you'd said no, just say that, and I'll stop."). I recommend this approach.

In any case, make it clear from the outset you want to be respectful about it.

Comment author: shokwave 21 March 2014 05:38:04AM *  -1 points [-]

It seems like the War on Terror, etc., are not actually about prevention, but about "cures".

Some drug addiction epidemic or terrorist attack happens. Instead of it being treated as an isolated disaster like a flood, which we should (but don't) invest in preventing in the future, it gets described as an ongoing War which we need to win. This puts it firmly in the "ongoing disaster we need to cure" camp, and so cost is no object.

I wonder if the reason there appears to be a contradiction is just that some policy-makers take prevention-type measures and create a framing of "ongoing disaster" around it, to make it look like a cure (and also to get it done).

Comment author: shokwave 18 March 2014 04:08:23PM *  1 point [-]

One would be ethical if their actions end up with positive outcomes, disregarding the intentions of those actions. For instance, a terrorist who accidentally foils an otherwise catastrophic terrorist plan would have done a very ‘morally good’ action.

This seems intuitively strange to many, it definitely is to me. Instead, ‘expected value’ seems to be a better way of both making decisions and judging the decisions made by others.

If the actual outcome of your action was positive, it was a good action. Buying the winning lottery ticket, as per your example, was a good action. Buying a losing lottery ticket was a bad action. Since we care about just the consequences of the action, the goodness of an action can only be evaluated after the consequences have been observed - at some point after the action was taken (I think this is enforced by the direction of causality, but maybe not).

So we don't know if an action is good or not until it's in the past. But we can only choose future actions! What's a consequentialist to do? (Equivalently, since we don't know whether a lottery ticket is a winner or a loser until the draw, how can we choose to buy the winning ticket and choose not to buy the losing ticket?) Well, we make the best choice under uncertainty that we can, which is to use expected values. The probability-literate person is making the best choice under uncertainty they can; the lottery player is not.

The next step is to say that we want as many good things to happen as possible, so "expected value calculations" is a correct way of making decisions (that can sometimes produce bad actions, but less often than others) and "wishful thinking" is an incorrect way of making decisions.

So the probability-literate used a correct decision procedure to come to a bad action, and the lottery player used an incorrect decision procedure to come to a good action.
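The decision procedure being defended here, choosing under uncertainty by expected value, can be sketched in a few lines. The ticket price and odds below are illustrative assumptions of mine, not figures from the comment:

```python
from fractions import Fraction

def expected_value(outcomes):
    """Expected value of an action: sum of probability * payoff over its outcomes."""
    return sum(p * v for p, v in outcomes)

# Hypothetical lottery: a $1 ticket with a one-in-a-million shot at $500,000.
buy = [
    (Fraction(1, 1_000_000), 500_000 - 1),   # win: prize minus ticket price
    (Fraction(999_999, 1_000_000), -1),      # lose: ticket price gone
]
dont_buy = [(Fraction(1), 0)]                # nothing ventured, nothing lost

# The "correct decision procedure": pick the action with the higher expected value.
best = max([("buy", expected_value(buy)),
            ("don't buy", expected_value(dont_buy))],
           key=lambda action: action[1])[0]
# Not buying wins in expectation (buying has EV of -$0.50), even though on any
# particular draw buying could turn out, in hindsight, to have been the good action.
```

This is exactly the wedge between judging actions and judging procedures: the procedure always says "don't buy", yet on the rare winning draw the action "buy" was the good one.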

The last step is to say that judging past actions changes nothing about the consequences of that action, but judging decision procedures does change something about future consequences (via changing which actions get taken). Here is the value in judging a person's decision procedures. The terrorist used a very morally wrong decision procedure to come up with a very morally good action: the act is good and the decision procedure is bad, and if we judge the terrorist by their decision procedure we influence future actions.


I think it's very important for consequentialists to always remember that an action's moral worth is evaluated on its consequences, not on the decision theory that produced it. This means that despite your best efforts, you will sometimes make the best decision possible and still commit bad acts.

If you let it collapse - if you take the shortcut and say "making the best decision you could is all you can do", then every decision you make is good, except for inattentiveness or laziness, and you lose the chance to find out that expected value calculations or Bayes' theorem needs to go out the window.

Comment author: CellBioGuy 12 March 2014 02:03:05PM *  5 points [-]

Irrationality game:

There are other 'technological civilizations' (in the sense of intelligent living things that have learned to manipulate matter in a complicated way) in the observable universe: 99%

There are other 'technological civilizations' in our own galaxy: 75% with most of the probability mass in regimes where there are somewhere between dozens and thousands.

Conditional on these existing: Despite some being very old, they are limited by the hostile nature of the universe and the realities of practical manipulation of matter and energy to never controlling much matter outside the surfaces of life-bearing worlds, and either never leave their solar systems of origin with anything self-replicating or their replicators on average produce less than 1 seed to continue. 95%

Humanity has already received and recorded a radio signal from another thing-analogous-to-a-technological-civilization. This was either unnoticed or not unequivocally recognized as such due to some combination of very short duration, being a one-off event that was never repeated, being modulated in a way that the receiver was not looking for, or being indistinguishable from terrestrial radio noise. 20%.

Conditional on the above, the "Wow!" signal was such a signal. 20%.
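For what it's worth, the two conditional estimates in the last paragraphs chain by simple multiplication into an unconditional one (a sketch using the numbers stated above):

```python
# Numbers from the comment above: 20% that humanity has already recorded such
# a signal, and, conditional on that, 20% that the Wow! signal was it.
p_received = 0.20
p_wow_given_received = 0.20

# P(Wow! was such a signal) = P(received) * P(Wow! | received) = 0.04
p_wow = p_received * p_wow_given_received
```

So the unconditional estimate implied for the Wow! signal is about 4%.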

Comment author: shokwave 13 March 2014 04:25:46PM 0 points [-]

Even given other technological civilisations existing, putting "matter and energy manipulation tops out a little above our current cutting edge" at 5% is way off.

Comment author: alicey 01 March 2014 04:28:32PM *  4 points [-]

i tend to express ideas tersely, which seems to count as poorly-explained if my audience is expecting more verbiage, so they round me off to the nearest cliche and mostly downvote me

i have mostly stopped posting or commenting on lesswrong and stackexchange because of this

like, when i want to say something, i think "i can predict that people will misunderstand and downvote me, but i don't know what improvements i could make to this post to prevent this. sigh."

revisiting this on 2014-03-14, i consider that perhaps i am likely to discard parts of the frame message and possibly outer message - because, to me of course it's a message, and to me of course the meaning of (say) "belief" is roughly what http://wiki.lesswrong.com/wiki/Belief says it is

for example, i suspect that the use of more intuitively sensible grammar in this comment (mostly just a lack of capitalization) often discards the frame-message-bit of "i might be intelligent" (or ... something) that such people understand from messages (despite this being an incorrect thing to understand)

Comment author: shokwave 03 March 2014 05:16:35AM 5 points [-]

so they round me off to the nearest cliche

I have found great value in re-reading my posts looking for possible similar-sounding cliches, and re-writing to make the post deliberately inconsistent with those.

For example, the previous sentence could be rounded off to the cliche "Avoid cliches in your writing". I tried to avoid that possible interpretation by including "deliberately inconsistent".

Comment author: Pablo_Stafforini 28 February 2014 05:17:41PM *  0 points [-]

I think that for the purposes of assessing the claim in question ("Eggs and whole milk are very nutrient dense"), unfortified versions of those foods should be considered. Otherwise, we should also regard cereals and many other foods as "very nutrient dense", simply because manufacturers decide to fortify them in all sorts of ways. (And I note that it's generally not a good idea to obtain your nutrients from supplements when you can obtain them from real food instead.)

In any case, even if we used data for fortified milk, it would still be false, in my opinion, that "whole milk is very nutrient dense." Vitamin D levels make a minor contribution to overall nutritional density.

Comment author: shokwave 02 March 2014 12:26:31AM *  0 points [-]

I suspect the real issue is using the "nutrients per calorie" meaning of nutrient dense, rather than interpreting it as "nutrients per some measure of food amount that makes intuitive sense to humans, like what serving size is supposed to be but isn't".

Ideally we would have some way of, for each person, saying "drink some milk" and seeing how much they drank, and "eat some spinach" and seeing how much they ate, then compare the total amount of nutrients in each amount on a person by person basis.

I know this is not the correct meaning of nutrient dense, but I think it's more useful.
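The difference between the two readings can be made concrete. A toy comparison, with entirely hypothetical figures I've made up for illustration (none of these are real nutrition data):

```python
def density(nutrient_units, amount):
    """Nutrient units per unit of 'amount' (calories, grams, or a typical portion)."""
    return nutrient_units / amount

# Hypothetical figures only: arbitrary "nutrient units" and calories per
# portion a person would plausibly consume in one sitting.
spinach = {"nutrients": 30, "calories": 25}    # a small side of spinach
milk = {"nutrients": 60, "calories": 150}      # a full glass of whole milk

per_calorie_spinach = density(spinach["nutrients"], spinach["calories"])  # 1.2
per_calorie_milk = density(milk["nutrients"], milk["calories"])           # 0.4
# Spinach wins on the "per calorie" reading, but per portion-actually-eaten
# the milk delivers more total nutrients (60 vs 30) in this toy example.
```

Under the first reading spinach is denser; under the "amount a human actually eats" reading the ordering can flip, which is the point of the comment above.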

Comment author: Vaniver 28 February 2014 05:06:47AM *  4 points [-]

I don’t care if you start with an exercise habit of one pushup a week, but you must do something.

Beeminder Beeminder Beeminder. Having an email reminder to exercise, and a penalty for not doing so, has been tremendously helpful for me: I now actually lift weights three times a week, as compared to just when I remembered to do so on my own.

Comment author: shokwave 28 February 2014 07:04:26AM 3 points [-]

Counterpoint: Beeminder does not play nice with certain types of motivation structures. I advocated it in the past; I do not anymore. It's probably not true for you, the reader (you should still go and use it, the upside is way bigger than the downside), but be aware that it's possible it won't work for you.

Comment author: jaibot 24 February 2014 08:03:50PM 5 points [-]

It looks like all the participants are consequentialists in good standing. The argument is over whose model of the world more accurately predicts consequences.

Comment author: shokwave 26 February 2014 04:00:30PM 1 point [-]

As I mentioned on Slate Star Codex as well: it seems like if you let consequentialists predict the second-order consequences of their actions, they strike violence and deceit off the list of useful tactics, in much the same way that a consequentialist doctor doesn't slaughter the healthy traveler for the organ transplants that would save five patients, because the doctor knows that destroying trust in the medical establishment is the worse consequence.

Comment author: Viliam_Bur 25 February 2014 02:33:14PM *  19 points [-]

A part which seems missing in the discourse -- probably because of politeness or strategy -- is that there are more than two sides, and that people on your side don't necessarily share all your values. When someone tells you: "Harry, look how rational I am; now do the rational thing and follow me in my quest to maximize my utility function!" it may be appropriate to respond: "Professor Quirrell, I have no doubts about your superb rationalist skills, but I'd rather use my own strategy to maximize my utility function." Your partner doesn't have to be literally Voldemort; mere corrupted hardware will do the job.

On the battlefield, some people share the common goal, and some people just enjoy fighting. Attacking the enemy makes both of them happy, but not for the same reasons. The latter will always advocate violence as the best strategy for reaching the goal. (The same thing happens on the other side, too.)

And an important part of the civilizing process Scott described is recognizing that both your side and the other side are at constant risk of being hijacked by people who derive their benefits from fighting itself, and who may actually be more similar to their counterparts than they are to you. And that miraculous behavior which shouldn't happen and seems like a losing strategy is actually the civilized people from both sides half-knowingly forging a fragile treaty with each other against their militant allies and leaders.

Which feels like a treason... because it is! It is recognizing that there is some important value other than the official axis of the conflict, and that this value should be preserved, sometimes even at the cost of some losses on the battlefield! -- This is what it means to have more than one value in your utility function. If you are not willing to sacrifice even epsilon of one value for a huge amount of the other value, then the other value simply does not exist in your utility function.

So, officially there is a battle between X and Y, and secretly there is a battle between X1 and X2 (and Y1 and Y2 on the other side). And people from X1 and X2 keep rationalizing about why their approach is the best strategy for the true victory of X against Y (and vice versa on the other side).

Civilization is a tacit conspiracy of decent people against psychopaths and otherwise defective or corrupted people. Whenever we try to make it explicit, it's too easy for someone to come and start yelling that X is the side of all decent people, and Y is the side of psychopaths, and this is why we from X have to fight dirty, silence the heretics in our own ranks, and then crush the opponents. So we stay quiet amidst the yelling, and then we ignore it and secretly do the right thing; hoping that the part of the conspiracy on the other side is still alive and ready to reciprocate. Sometimes it works, sometimes it doesn't; but on average we seem to be winning. And I wouldn't trade it for a "rationalist" pat on the shoulder from someone I don't trust.

Comment author: shokwave 26 February 2014 03:56:44PM 0 points [-]

So, officially there is a battle between X and Y, and secretly there is a battle between X1 and X2 (and Y1 and Y2 on the other side). And people from X1 and X2 keep rationalizing about why their approach is the best strategy for the true victory of X against Y (and vice versa on the other side).

This part doesn't make clear enough the observation that X2 and Y2 are cooperating, across enemy lines, to weaken X1 and Y1. 2 being politeness and community, and 1 being psychopathy and violence.
