
Comment author: PeterisP 29 November 2016 11:54:09PM *  5 points

I'm going to go out and state that the chosen example of "middle school students should wear uniforms" fails the prerequisite of "Confidence in the existence of objective truth", as do many (most?) "should" statements.

I strongly believe that there is no objectively true answer to the question "middle school students should wear uniforms", because the truth of that statement depends not so much on one's understanding of the world or opinion about student uniforms as on the interpretation of what "should" means.

For example, "A policy requiring middle school students to wear uniforms is beneficial to the students" is a valid topic of discussion that can uncover some truth, and "A policy requiring middle school students to wear uniforms is mostly beneficial to [my definition of] society" is a completely different topic of discussion that likely can result in a different or even opposite answer.

Unqualified "should" statements are a common trap that prevents reaching a common understanding and exploring the truth. At the very least, you should clearly distinguish "should" as good, informed advice from "should" as a categorical moral imperative. If you want to discuss "X should do Y" in the sense of weighing the advantages of doing Y (or not), then you should (see what I'm doing here?) convert it to a statement of the form "X should do Y because that's a dominant/better/optimal choice that benefits them" - otherwise you won't get the discussion you want, just an argument between one camp arguing that question and another camp arguing about whether we should force X to do Y because everyone else wants it.

Comment author: fortyeridania 02 March 2015 09:58:35PM *  7 points

Overestimating can be costly too. That's why bluffing can work, in poker as in war.


Comment author: PeterisP 03 March 2015 09:52:34AM -1 points

The most important decisions come before starting a war, and there the two kinds of mistake have very different costs. Overestimating your enemy results in peace (or cold war), which basically means you lose out on some opportunistic conquests; underestimating your enemy results in a bloody, unexpectedly long war that can disrupt you for a decade or more - there are many good examples of that in 20th-century history.

Comment author: Vaniver 02 March 2015 02:11:28PM *  16 points

Why should the battle lines be drawn in terms of conclusions?

Suppose I agree with someone's conclusion but disagree with the method they used to reach it. Are we political allies, or enemies? Of course, "politics" is the answer to 'why should the battle lines be drawn this way?'

Now, for Tyler as a pundit, the answer is different. Staying in an intellectual realm where he thinks like the other people around him ensures that any disagreements are interesting and intelligible.

Comment author: PeterisP 03 March 2015 09:48:35AM 4 points

"Are we political allies, or enemies?" is rather orthogonal to that - your political allies are those whose actions support your goals and your political enemies are those whose actions hurt your goals.

For example, consider a powerful and popular extreme radical member of the "opposite" camp who reaches conclusions you disagree with, uses methods you disagree with, and is generally toxic and spewing hate. That's often a prime example of a political ally: their actions incite the moderate members of society to start supporting you and focusing on your important issues instead of something else. The existence of such a pundit is important to you; you want them to keep doing what they do and have their propaganda be successful, up to a point. I won't go into examples of particular politicians/parties of various countries - that gets dirty quickly - but many strictly opposed radical groups are actually allies in this sense against the majority of moderates, and sometimes they actively coordinate and cooperate despite the ideological differences.

On the other hand, consider a public speaker who targets the same audience as you do, shares the same goals/conclusions and intended methods, but simply performs consistently poorly - using sloppy arguments that alienate part of the target audience, or exhibiting disgusting personal behavior that hurts the image of your organization. That's a good example of a political enemy, one that you must work to silence and get ignored, despite them being "aligned" with your conclusions.

And of course, a political competitor who does everything you want to do but holds a chair/position that you want for yourself is also a political enemy. Infighting inside powerful political groups is a normal situation, and when (and if) it goes public, very interesting political arguments appear to distinguish one from their political enemy despite sharing most of the platform.

In response to comment by timujin on On Caring
Comment author: RichardKennaway 12 October 2014 07:11:00PM 4 points

Came to the conclusion that I don't care at all about anyone else, and am only doing good things for the altruistic high and social benefits.

What is the difference between an altruistic high and caring about other people? Isn't the former what the latter feels like?

In response to comment by RichardKennaway on On Caring
Comment author: PeterisP 15 October 2014 04:07:25PM 5 points

The difference is that there are many actions that help other people but don't give an appropriate altruistic high (because your brain doesn't see or relate to those people much) and there are actions that produce a net zero or net negative effect but do produce an altruistic high.

The built-in care-o-meter of your body has known faults and biases, and it measures something often related (at least in classic hunter-gatherer society model) but generally different from actually caring about other people.

In response to On Caring
Comment author: PeterisP 15 October 2014 04:01:23PM *  1 point

An interesting followup to your example of an oiled bird deserving 3 minutes of care that came to mind:

Let's assume that there are 150 million suffering people right now, which is a completely wrong random number but a somewhat reasonable order-of-magnitude assumption. A quick calculation estimates that if I dedicate every single waking moment of my remaining life to caring about them and fixing the situation, then I've got a total of about 15 million care-minutes.

According to even the best possible care-o-meter I could have, all the problems in the world cannot be worth more than 15 million care-minutes in total - simply because there aren't any more of them to allocate. And in a fair allocation, the average suffering person 'deserves' 0.1 care-minutes of my time, assuming I leave nothing at all for the oiled birds. This is a very different meaning of 'deserve' than the one used in the post - but I'm afraid it is the more meaningful one.
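The back-of-envelope arithmetic can be sketched as follows; note the 50-year remaining lifespan and 16 waking hours per day are illustrative assumptions of mine, not figures from the comment:

```python
# Total "care-minutes" available in a remaining lifetime.
# Assumed figures (illustrative): 50 remaining years, 16 waking hours/day.
remaining_years = 50
waking_hours_per_day = 16
care_minutes = remaining_years * 365 * waking_hours_per_day * 60
print(care_minutes)  # 17_520_000 - same order of magnitude as 15 million

# Fair allocation across the assumed 150 million suffering people.
suffering_people = 150_000_000
per_person = care_minutes / suffering_people
print(round(per_person, 2))  # roughly 0.1 care-minutes each
```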

Comment author: Benito 29 August 2014 06:50:47PM 1 point

Let's make the further assumption that our common ancestor with dolphins was dumber than the modern octopus. This doesn't seem a stretch seeing how intelligent the modern octopus can be, how minor in terms of ecological role the common dolphin-human ancestor must have been, and seeing the stupidity of many of the descendants of that common ancestor.

Could you expand on the second point, as to why it must have a minor ecological role, and why this means it would be dumber? I know little evolutionary theory, and would appreciate the explanation. Cheers.

Comment author: PeterisP 04 September 2014 09:50:27PM 3 points

I'd read it as an acknowledgement that any intelligence has a cost, and if your food is passive instead of antagonistic, then it's inefficient (and thus very unlikely) to put such resources into outsmarting it.

Comment author: [deleted] 30 August 2014 02:43:11PM 7 points

Why? Having dabbled a bit in evolutionary simulations, I find that, once you have unicellular organisms, the emergence of cooperation between them is only a matter of time, and from there multicellulars form and cell specialization based on division of labor begins. Once you have a dedicated organism-wide communication subsystem, why would it be unlikely for a centralized command structure to evolve?

On Earth multicellularity arose independently several dozen times but AFAIK only animals have anything like a central nervous system.

Comment author: PeterisP 04 September 2014 09:46:39PM 2 points

If an animal-complexity CNS is your criterion, then humans + octopuses would be a counterexample: our last common ancestor, the urbilaterian, wouldn't be expected to have such a system, so octopus intelligence must have formed separately.

Comment author: pinyaka 03 September 2014 10:57:10PM 3 points

An AI should only avoid wasting energy if energy is a resource limiting the maximization of its utility function. If you're a gold-ingot-manufacturing-maximizer, you don't need all the energy available from your star because there isn't enough gold to use it. Even if you're converting everything that isn't a gold ingot in your system into seeds for turning other star systems into gold ingot factories, it's not obvious (to me at least) that you need all the available energy to do that.

Comment author: PeterisP 04 September 2014 09:38:59PM *  6 points

A gold-ingot-manufacturing-maximizer can easily manufacture more gold than exists in its star system by using arbitrary amounts of energy to create gold, starting with simple nuclear reactions to transmute bismuth or lead into gold and ending with a direct energy-to-matter-to-gold-ingots process.

Furthermore, if you plan to send copies-of-you to N other systems to manufacture gold ingots there, then as long as there is free energy you can send N+1 copies instead. A gold-ingot manufacturing rate that grows proportionally to time^(N+1) is much faster than one that grows as time^N, so sending only N copies wouldn't be maximizing.
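As a toy illustration of that growth argument - taking the comment's polynomial-degree framing at face value, with arbitrary numbers of my own choosing:

```python
# Compare ingot output growing as t**N (send N copies) versus
# t**(N+1) (send one more copy). Illustrative only: the degree
# stands in for the number of replicating copies, as in the
# comment's rough argument.
def output(t, degree):
    return t ** degree

N = 3
for t in (10, 100, 1000):
    ratio = output(t, N + 1) / output(t, N)
    print(t, ratio)  # the ratio equals t, so the extra copy
                     # wins by an ever-growing margin over time
```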

And a third point is that if it's possible that somewhere in the universe there are some ugly bags of mostly water that prefer to use their atoms and energy for not manufacturing gold ingots but their survival; then it's very important to ensure that they don't grow strong enough to prevent you from maximizing gold ingot manufacturing. Speed is of the essence, you must reach them before it's too late, or gold ingot manufacture won't get maximized.

Comment author: CCC 03 September 2014 10:02:26AM 9 points

You make a very compelling argument, and on balance I think that you are probably correct in your conclusions.

Part of it may be because, for a land animal, the ground is always there. There's always a strong probability of a rock at your feet to pick up. For sea creatures, it's possible (in theory) to wander around for months without seeing another solid object. So, land animals have less space to move about in, but have an easier time finding simple tools.

This, of course, relies on the idea that tools - unliving lumps of matter used for a purpose - are a necessary component of a civilisation. It goes without saying that tools are a necessary component of our civilisation; but are they a necessary component of all possible civilisations?

The theoretical underwater civilisation has one thing in great abundance - space. The oceans cover three-quarters of our planet, and sea creatures can move up and down easily enough. Is there any way that that space can be used, as a foundation for some form of aquatic civilisation?

Thinking about bubble netting - it should be possible for dolphins to practice a form of agriculture, herding and taming schools of edible fish, much like shepherds. (I believe ants do something similar with aphids, and I'm pretty sure a dolphin is more intelligent than an ant). Once one has shepherds, one can easily move towards the idea of breeding fish for a purpose - breeding big fish with big fish to get bigger fish, for example. Or breeding tasty with tasty to get tastier. There's certainly space in the oceans for the dolphins to create a lot of fish farms... and then for these fish farms to swap and interbreed particularly interesting lines.

I'm not quite sure how to believably get beyond a basic agricultural/nomadic existence, though. (Unless perhaps the dolphins start breeding intelligent octopi with intelligent octopi to get more intelligent octopi or something along those lines).

Comment author: PeterisP 04 September 2014 08:53:00PM *  5 points

Dolphins are able to herd schools of fish, cooperating to keep a 'ball' of fish together for a long time while feeding from it.

However, taming and sustained breeding is a long way from herding behavior - it requires long term planning for multi-year time periods, and I'm not sure if that has been observed in dolphins.

Comment author: Zubon 23 November 2013 01:17:02AM 27 points

I hereby take part in the tradition and note that the tradition makes the following moot for relatively low levels of karma. You may round off your karma score if you want to be less identifiable. If your karma score is 15000 or above, you may put 15000 if you want to be less identifiable.

Income question: needs to specify individual or household. You may also want to specify sources, such as whether to include government aid, only include income from wages, or separate boxes for different categories of income.

I have done professional survey design and am available to assist with reviewing the phrasing of questions for surveys, here or on other projects.

Comment author: PeterisP 23 November 2013 07:12:53AM 10 points

The income question needs to be explicit about whether it's pre-tax or post-tax, since that makes a huge difference, and the default convention differs between cultures: in some places "I earn X" means pre-tax, and in others post-tax.
