Comment author: Bundle_Gerbe 05 November 2014 10:35:05AM 2 points [-]

How about:

Specialization of Labor vs. Transaction/Communication costs: a trade-off between having a task split among multiple people or organizations vs. done by a single person. Generalism vs. Specialization might be a more succinct way to put it.

Also, another pair that has a close connection is 3 and 7. Exploration is a flexible strategy, since it leaves open resources to exploit better opportunities that turn up, while exploitation gains from commitment.
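(The exploration/exploitation trade-off mentioned above is the classic multi-armed bandit setting. A minimal epsilon-greedy sketch, purely illustrative and not from the original comment, where a small probability of exploring preserves flexibility while greedy pulls commit to the best-known option:)

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, steps=1000, seed=0):
    """Minimal epsilon-greedy bandit: explore with prob. epsilon, else exploit."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # pulls per arm
    estimates = [0.0] * n_arms   # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:   # explore: stay flexible
            arm = rng.randrange(n_arms)
        else:                        # exploit: commit to the best arm so far
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # incremental update of the running mean
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = epsilon_greedy([0.2, 0.5, 0.9])
```

With a small epsilon, most pulls go to the empirically best arm, but the occasional exploratory pull keeps the estimates of the other arms from going stale.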

Comment author: ruelian 27 October 2014 05:13:24PM 8 points [-]

I have a question for anyone who spends a fair amount of their time thinking about math: how exactly do you do it, and why?

To specify, I've tried thinking about math in two rather distinct ways. One is verbal and involves stating terms, definitions, and the logical steps of inference I'm making in my head or out loud, as I frequently talk to myself during this process. This type of thinking is slow, but it tends to work better for actually writing proofs and when I don't yet have an intuitive understanding of the concepts involved.

The other is nonverbal and based on understanding terms, definitions, theorems, and the ways they connect to each other on an intuitive level (note: this takes a while to achieve, and I haven't always managed it) and letting my mind think it out, making logical steps of inference in my head, somewhat less consciously. This type of thinking is much faster, though it has a tendency to get derailed or stuck and produces good results less reliably.

Which of those, if any, sounds closer to the way you think about math? (Note: most of the people I've talked to about this don't polarize it quite so much and tend to do a bit of both, i.e. thinking through a proof consciously but solving potential problems that come up while writing it more intuitively. Do you also divide different types of thinking into separate processes, or use them together?)

The reason I'm asking is that I'm trying to transition to spending more of my time thinking about math not in a classroom setting and I need to figure out how I should go about it. The fast kind of thinking would be much more convenient, but it appears to have downsides that I haven't been able to study properly due to insufficient data.

Comment author: Bundle_Gerbe 28 October 2014 07:36:22AM 1 point [-]

As someone with a Ph.D. in math, I tend to think verbally in as much as I have words attached to the concepts I'm thinking about, but I never go so far as to internally vocalize the steps of the logic I'm following until I'm at the point of actually writing something down.

I think there is another much stronger distinction in mathematical thinking, which is formal vs. informal. This isn't the same distinction as verbal vs. nonverbal, for instance, formal thinking can involve manipulation of symbols and equations in addition to definitions and theorems, and I often do informal thinking by coming up with pretty explicitly verbal stories for what a theorem or definition means (though pictures are helpful too).

I personally lean heavily towards informal thinking, and I'd say that trying to come up with a story or picture for what each theorem or definition means as you are reading will help you a lot. This can be very hard sometimes. If you open a book or paper and aren't able to get anywhere when you try to do this to the first chapter, it's a good sign that you are reading something too difficult for your current understanding of that particular field. At a high level of mastery of a particular subject, you can turn informal thinking into proofs and theorems, but the first step is to be able to create stories and pictures out of the theorems, proofs, and definitions you are reading.

Comment author: Eugine_Nier 03 December 2013 12:56:13AM 14 points [-]

The most important professions in the modern world may be the most reviled: advertiser, salesperson, lawyer, and financial trader. What these professions have in common is extending useful social interactions far beyond the tribe-sized groups we were evolved to inhabit (most often characterized by the Dunbar number). This commonly involves activities that fly in the face of our tribal moral instincts.

Nick Szabo

Comment author: Bundle_Gerbe 04 December 2013 02:47:36AM 8 points [-]

Interestingly, advertisers, lawyers, and financial traders all have in common that they are agents who play zero-sum or almost zero-sum games on behalf of someone. People who represent big interests in these games are compensated well, because of the logic of the game: so much is at stake that you want to have the best person representing you, so these people's services are bid up. But there is still the feeling that the game is wasteful, though perhaps unavoidably so.

Also, problematically for the first sentence, I don't think many people would necessarily come up with the four professions named, especially "advertiser" and "salesperson", if asked to name the most important professions in the modern world. Some important professions, like "scientist", are widely valorized, while others, like "engineer", are at the least not reviled.

Comment author: Bundle_Gerbe 01 November 2013 11:26:39AM 18 points [-]

The theme of this book, then, must be the coming to consciousness of uncertain inference. The topic may be compared to, say, the history of visual perspective. Everyone can see in perspective, but it has been a difficult and long-drawn-out effort of humankind to become aware of the principles of perspective in order to take advantage of them and imitate nature. So it is with probability. Everyone can act so as to take a rough account of risk, but understanding the principles of probability and using them to improve performance is an immense task.

James Franklin, The Science of Conjecture: Evidence and Probability before Pascal

Comment author: JoshuaFox 31 October 2013 07:19:01PM 1 point [-]

Yes, creating this arbitrary dichotomy keeps lieutenants from fraternizing with enlisted men. But it doesn't keep Chief Master Sergeants from fraternizing with privates or generals from fraternizing with lieutenants. So, taken merely as a way to prevent fraternization across ranks, the dichotomy is of little value.

Comment author: Bundle_Gerbe 31 October 2013 08:00:59PM 4 points [-]

Well, just because the rule doesn't by itself prevent all possible cases of inappropriate cross-rank fraternization doesn't mean it has no value. There are other norms and practices that discourage generals from hanging out with lieutenants, e.g. generals usually get fancy lodging separate from the lieutenants. I suspect that cutting off lower-ranking officers from fraternizing with enlisted men prevents what would otherwise be one of the more common problematic cases.

If the military were even more concerned with this problem, it could have three or more groups instead of two, say, enlisted, officers and super-officers. But there are also tradeoffs to having more groupings, so the military sticks with two (part of this might be historically contingent, maybe three groups would work just as well but everyone is just copying the consensus choice of two).

Comment author: Bundle_Gerbe 31 October 2013 07:11:16PM *  3 points [-]

I think that in the military, the "no fraternizing with enlisted personnel" rule might be one reason why a hard separation is useful. This kind of rule requires a cutoff and can't easily be replaced with a rule like "no fraternizing with people of a rank three or more below your own." For instance, how would you set up the housing arrangements? Also, promotions would be awkward under this system, since you would always have a group of people you previously could fraternize with but no longer can.

Comment author: Bundle_Gerbe 20 September 2013 01:39:18AM 9 points [-]

I think the containment of the SARS epidemic in 2003 is an under-appreciated success story. SARS spread fairly easily and had a 9% mortality rate, so it could well have killed millions, but it was contained thanks to the WHO and to the quarantine efforts of various governments. There wasn't much coverage in the vein of "hooray! one of the worst catastrophes in human history has been averted!" afterwards.

Comment author: Vladimir_Nesov 31 January 2013 04:01:48PM *  5 points [-]

Is there any situation that humanity could face that would make us collectively say "Yeah doing Y is right, even though it seems bad for us. But the sacrifice is too great, we aren't going to do it"

This is still probably not the question that you want to ask. Humans do incorrect things all the time, with excellent rationalizations, so "But the sacrifice is too great, we aren't going to do it" is not a particularly interesting specimen. To the extent that you think that "But the sacrifice is too great" is a relevant argument, you think that "Yeah doing Y is right" is potentially mistaken.

I guess the motivation for this post is in asking whether it is actually possible for a conclusion like that to be correct. I expect it might be, mainly because humans are not particularly optimized thingies, so it might be more valuable to use the atoms to make something else that's not significantly related to the individual humans. But again to emphasize the consequentialist issue: to the extent such judgment is correct, it's incorrect to oppose it; and to the extent it's correct to oppose it, the judgment is incorrect.

Comment author: Bundle_Gerbe 31 January 2013 10:45:53PM -1 points [-]

"But the sacrifice is too great" is a relevant argument, you think that "Yeah doing Y is right" is potentially mistaken.

I think I disagree with this. On a social and political level, the tendency to rationalize is so pervasive that it would sound completely absurd to say "I agree that it would be morally correct to implement your policy, but I advocate not doing it, because it will only help future generations, screw those guys." In practice, when people attempt to motivate each other in the political sphere to do something, it is always accompanied by the claim that doing that thing is morally right. But it is in principle possible to try to get people not to do something by arguing "hey, this is really bad for us!" without arguing against its moral rightness. This thought experiment is a case where this exact "let's grab the banana" position is supposed to be tempting.

Comment author: Andreas_Giger 31 January 2013 06:31:28AM *  1 point [-]

This is yet another poorly phrased, factually inaccurate post containing some unorthodox viewpoints that are unlikely to be taken seriously because people around here are vastly better at deconstructing others' arguments than fixing them for them.

Ignoring any formal and otherwise irrelevant errors such as what utilitarianism actually is, I'll try to address the crucial questions; both to make Bundle_Gerbe's viewpoints more accessible to LW members and also to make it more clear to him why they're not as obvious as he seems to think.

1: How does creating new life compare to preserving existing life in terms of utility or value?

Bundle_Gerbe seems to be of the view that they are of identical value. That's not a view I share, mostly because I don't assign any value to the creation of new life, but I must admit that I am somewhat confused (or undecided) about the value of existing human life, both in general and as a function of parameters such as remaining life expectancy. Maybe there's some kind of LW consensus I'm not aware of, but the whole issue seems like a matter of axioms to me rather than anything that could objectively be inferred from some sort of basic truth.

2: If creation of life has some positive value, does this value increase if creation is preponed?

Not a question relevant to me, but it seems that this would partly depend on whether earlier creation implied higher total amount of lives, or just earlier saturation, for example because humans live forever and ultimately the only constraints will be space. I'm not entirely certain I correctly understand Bundle_Gerbe's position on this, but it seems that his utility function is actually based on total lifetime as opposed to actual number of human lives, meaning that two humans existing for one second each would be equivalent to one human existing for two seconds. That's kind of an interesting approach with lots of implied questions, such as whether travelling at high speeds would reduce value because of relativistic effects.

3: Is sacrificing personal lifetime to increase total humanity lifetime value a good idea?

If your utility function is based on total humanity lifetime value, and you're completely altruistic, sure. Most people don't seem to be all that altruistic, though. If I had to choose between saving one or two human beings, I would choose the latter option, but I'd never sacrifice myself to save a measly two humans. I would be very surprised if CEV turned out to require my death after 20 years, and in fact I would immediately reclassify the FAI in question as UFAI. Sounds like an interesting setup for an SF story, though.

For what it's worth, I upvoted the post. Not because the case was particularly well presented, obviously, but because I think it's not completely uninteresting and because I perceived some of the comments such as Vladimir_Nesov's which got quite some upvotes as rather unfair.

That being said, the title is badly phrased while not being very relevant, either.

Comment author: Bundle_Gerbe 31 January 2013 10:18:05AM 2 points [-]

Thanks for this response. One comment about one of your main points: I agree that the tradeoff of number of humans vs. length of life is ambiguous. But to the extent our utility function favors numbers of people over total life span, that makes the second scenario more plausible, whereas if total life span is more important, the first is more plausible.

I agree with you that both the scenarios would be totally unacceptable to me personally, because of my limited altruism. I would badly want to stop it from happening, and I would oppose creating any AI that did it. But I disagree in that I can't say that any such AI is unfriendly or "evil". Maybe if I was less egoistic, and had a better capacity to understand the consequences, I really would feel the sacrifice was worth it.

Comment author: Vladimir_Nesov 31 January 2013 02:57:35AM *  21 points [-]

To the extent your question is, "Suppose X is the correct answer. Is X the correct answer?", X is the correct answer. Outside of that supposition it probably isn't.

Comment author: Bundle_Gerbe 31 January 2013 09:52:30AM 6 points [-]

I don't think that's what I'm asking. Here's an analogy. A person X comes to the conclusion fairly late in life that the morally best thing they can think of to do is to kill themselves in a way that looks like an accident and will their sizable life insurance policy to charity. This conclusion isn't a reductio ad absurdum of X's moral philosophy, even if X doesn't like it. Regardless of this particular example, it could presumably be correct for a person to sacrifice themselves in a way that doesn't feel heroic, isn't socially accepted, and doesn't save the whole world but maybe only a few far-away people. I think most people in such a situation (who managed not to rationalize the dilemma away) would probably not do it.

So I'm trying to envision the same situation for humanity as a whole. Is there any situation that humanity could face that would make us collectively say "Yeah doing Y is right, even though it seems bad for us. But the sacrifice is too great, we aren't going to do it". That is, if there's room for space between "considered morality" and "desires" for an individual, is there room for space between them for a species?
