Comment author: Romashka 30 March 2015 03:57:25PM 1 point [-]

How do people who want to live forever (or something like this) renormalize their everyday approach to life and relationships? I mean, children and parents move apart with each passing day (normally), and if you imagine living for at least a hundred years, how do you keep yourself interested in older links? It seems this would take a lot more effort.

Comment author: seer 07 April 2015 04:29:14AM 0 points [-]

I mean, children and parents move apart with each passing day (normally), and if you imagine living for at least a hundred years, how do you keep yourself interested in older links?

It depends on the details of how the immortality is achieved. This is related to the "would someone uploaded as a young child experience virtual puberty?" question.

Comment author: benkuhn 01 April 2015 01:56:15AM 3 points [-]

Is my general line of reasoning correct here, and is the style of reasoning a good style in the general case? I am aware that Eliezer raises points against "small probability multiplied by high impact" reasoning, but the fact is that a rational agent has to have a belief about the probability of any event, and inaction is itself a form of action that could be costly due to missing out on everything; privileging inaction is a good heuristic but only a moderately strong one.

Sometimes, especially in markets and other adversarial situations, inaction is secretly a way to avoid adverse selection.

Even if you're a well-calibrated agent--so that if you randomly pick 20 events to which you assign 5% subjective probability, on average one of them will happen--the set "all events where someone else is willing to trade on odds more favorable than 5%" is not a random selection of events.
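The calibration arithmetic here can be sketched with a quick simulation (illustrative only, not from the original comment; the numbers mirror the 20-events-at-5% example):

```python
import random

random.seed(0)

P = 0.05        # subjective probability assigned to each event
N_EVENTS = 20   # events picked per batch
N_TRIALS = 100_000

# For a well-calibrated agent, a batch of 20 independent 5% events
# should contain about 20 * 0.05 = 1 hit on average.
total_hits = sum(
    sum(random.random() < P for _ in range(N_EVENTS))
    for _ in range(N_TRIALS)
)
avg_hits = total_hits / N_TRIALS
print(f"average hits per batch of {N_EVENTS}: {avg_hits:.2f}")
```

The adverse-selection point is precisely that this average stops holding once the 20 events are chosen by a counterparty offering you better-than-5% odds, rather than at random.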

Whether the Bitcoin markets are efficient enough to worry about this is an open question, but it should at least be a signal for you to make your case more robust than pulling a 5% number out of thin air, before you invest. I think the Reddit commenters were reasonable (a sentence I did not expect to type) for pointing this out, albeit uncharitably.

Is "take the inverse of the size of the best-fitting reference class" a decent way of getting a first-order approximation? If not, why not? If yes, what are some heuristics for optimizing it?

In my experience, this simply shifts the debate to which reference class is the best-fitting one, aka reference-class tennis. For instance, a bitcoin detractor could argue that the reference class should also include Beanie Babies, Dutch tulips, and other similar stores of value.
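The inverse-reference-class heuristic, and how reference-class tennis swings it, can be sketched as follows (the class names and sizes are hypothetical, chosen only to illustrate the sensitivity):

```python
# First-order estimate: P(this member succeeds) ~= 1 / (size of reference class).
# The debate then simply shifts to which class is the right one.
reference_classes = {
    "candidate global digital currencies": 20,                       # hypothetical size
    "speculative stores of value (tulips, Beanie Babies, ...)": 1000,  # hypothetical size
}

estimates = {name: 1 / size for name, size in reference_classes.items()}
for name, p in estimates.items():
    print(f"{name}: ~{p:.2%}")
```

Two defensible-sounding classes differ here by a factor of 50, which is exactly why the heuristic tends to relocate the argument rather than settle it.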

Comment author: seer 01 April 2015 05:29:36AM 7 points [-]

For instance, a bitcoin detractor could argue that the reference class should also include Beanie Babies, Dutch tulips, and other similar stores of value.

The difference is that it's easy to make more tulips or Beanie Babies, but the maximum number of Bitcoins is fixed.

Comment author: Kindly 31 March 2015 08:46:51PM 1 point [-]

What changes is that I would like to have a million dollars as much as Joe would. Similarly, if I had to trade between Joe's desire to live and my own, the latter would win.

In another comment you claim that I do not believe my own argument. This is false. I know this because if we suppose that Joe would like to be killed, and that Joe's friends would not be sad if he died, then I am okay with Joe's death. So there is no other hidden factor that moves me.

I'm not sure what the observation that I do not give all of my money away to charity has to do with anything.

Comment author: seer 01 April 2015 03:02:35AM 4 points [-]

What changes is that I would like to have a million dollars as much as Joe would.

Um, what are you using to compare preferences across people?

Similarly, if I had to trade between Joe's desire to live and my own, the latter would win.

How about Joe's desire to live against your desire to not have him annoy you, or to have sex with his wife, or any number of other possible motives?

Comment author: gjm 31 March 2015 09:01:31AM 4 points [-]

I'm claiming neither Kindly nor you actually believe the argument you've given.

Your overconfidence in your mind-reading abilities is noted.

Except you're not doing that [...]

The fact that someone doesn't act as a perfect utility maximizer doesn't mean that utility gains aren't worth seeking, for themselves or for others. If you ask "why did you buy that thing?" and I say I bought it because it was half the price of the alternative, am I refuted if you point out that I don't always buy the cheapest things I can?

As I said: a reason, not the only possible reason.

Comment author: seer 01 April 2015 02:59:48AM 4 points [-]

How do you distinguish the part of your ethics that you ignore in practice, e.g., not giving all your money to charity, from the part you insist you and everybody follow, e.g., not killing Joe even though he's being really really annoying?

Comment author: TheAncientGeek 31 March 2015 09:02:28AM *  1 point [-]

The question was specifically about demotic dictatorships.

No kind of dictator has to generate democratic support. Demotic dictators are supposed to justify themselves by generating ideological support, but that doesn't actually distinguish them from real-world monarchies, because of all the ideology about God Put Me on the Throne.

Monarchs had a lot less dissent to quash; the dress code at Versailles, for example, required all men to carry swords.

OTOH, the Star Chamber.

Comment author: seer 01 April 2015 02:28:39AM 7 points [-]

Demotic dictators are supposed to justify themselves by generating ideological support, but that doesn't actually distinguish them from real-world monarchies, because of all the ideology about God Put Me on the Throne.

"The People Support Me" is a lot easier to falsify than "God Put Me on the Throne", so you need correspondingly more oppression to keep anyone from falsifying it.

Comment author: [deleted] 31 March 2015 07:16:01AM *  2 points [-]

I am seriously weirded out by this discussion... how is it hard to understand that conditions change? One of the weirdest aspects of NRx is the complete lack of cultural conservatism - by that I mean attention to the largely politics-independent changing of mores and attitudes, the kind of thing e.g. Theodore Dalrymple bemoans - and to the fact that political institutions require a culture that is compatible with them. Instead there is from-above system-building, as if society were a computer and a political system a program, an algorithm: just find the right one and it gets executed. Where does this social-engineering attitude come from? How is it hard to see that cultural conditions are prerequisites - that just as democracy does not work well in tribal societies in Africa, monarchies cannot work well in societies where everybody's mind is full of ideas received from radical intellectuals? How is it hard to see how different the cultural conditions were? Those monarchies required that the population be religious and see the monarch as divinely ordained. They required that the population be fairly uneducated and thus not influenced by radical intellectualism. They required the absence of widespread literacy, and book printing and distribution technology expensive enough not to deliver seditious flyers into the hands of cobblers, and so on.

What weirds me out here is the general engineering attitude that systems of politics are primary and culture is at best secondary. Where does this come from? From a bunch of programmers and engineers who have little respect for the humanities, and for the incredible power that education and the written word have over human minds?

Systems are absolutely secondary to culture. To me - I am mostly humanities-oriented, I suck at math, and my programming is largely just scripting, so I am no hacker - this is more than obvious. For example, the reason France is still a more or less rich and functional country is the other France: everything invented there in politics did not have much effect beyond Paris, and e.g. the Catholic peasants of Gascogne lived a largely politics-free existence where their lives were mainly determined by cultural norms (work, pray, marry, work, work even more, pray, die), with politics and government a remote thing one occasionally pays taxes to but which is not relevant to daily life. They don't even talk the same way (oc/oïl languages). And despite all the bullshit from Paris, France works largely because these rural cultural norms were effective. Politics could not make them worse - but it also cannot make them better. If cultural norms are bad, you cannot build a good political system; if they are good, it takes a lot of effort for a bad system to ruin them. I am not saying culture is non-reducible - maybe it reduces to other factors - but it is certainly as hell not reducible to politics. Politics is 100% culture-reducible: culture determines even what political concepts mean.

In short, I find it a huge blind spot in NRx to engage in systems-building and consider culture only an afterthought.

Why, with a good enough culture you could basically afford to be an anarchist and not worry about political systems at all! That was roughly Tolkien's idea: the Shire hardly needed any government at all, because its cultural norms were productive and peaceful and honest. THIS is a huge lesson you guys totally don't understand, apparently.

Comment author: seer 01 April 2015 02:20:21AM *  7 points [-]

One of the weirdest aspects of NRx is the complete lack of cultural conservatism - by that I mean the largely politics-independent changing of mores and attitudes, the kind of thing e.g. Theodore Dalrymple bemoans.

Um, those changes are not politics-independent; they are being caused by various political forces.

Politics is 100% culture-reducible: culture determines even what political concepts mean.

And where does culture come from?

Comment author: irrational_crank 30 March 2015 09:17:31AM 0 points [-]

Even if the atheist were a moral nihilist (he is of course conflating atheism and nihilism), it still would not be rational to carry out the action: society's condemnation by people with moral systems, plus appropriate deterrents (e.g. the risk of getting caught and receiving a life sentence), would make it a bad bet. So saying that moral nihilism will lead to mass murder is wrong, so long as a sufficiently large percentage of the population believes in consistent and sensible moral systems. The moral nihilist would also have to overcome his brain's normal revulsion against killing people; the effort and guilt involved would probably outweigh the utility gained from the murder. To say moral nihilism leads to murder is a non sequitur.

I also agree that although it can be useful, in discussions with people you know are rational, to choose extreme examples as a "least convenient world", it can be mind-killing for those not sufficiently trained. Certainly that is what has happened to the media in this example, which has focused on the other views and motives of the arguer rather than the content of the argument, which has many flaws.

Comment author: seer 31 March 2015 05:17:09AM 4 points [-]

Even if the atheist were a moral nihilist (he is of course conflating atheism and nihilism), it still would not be rational to carry out the action: society's condemnation by people with moral systems, plus appropriate deterrents (e.g. the risk of getting caught and receiving a life sentence), would make it a bad bet. So saying that moral nihilism will lead to mass murder is wrong, so long as a sufficiently large percentage of the population believes in consistent and sensible moral systems.

That's an argument against promoting moral nihilism.

Comment author: [deleted] 30 March 2015 12:58:24PM 1 point [-]

Intuition. Terminal values.

Comment author: seer 31 March 2015 05:10:59AM 5 points [-]

You'd be amazed what can seem intuitive when you find yourself in a situation where it would be really convenient for Joe to die.

Comment author: WinterShaker 30 March 2015 10:44:11AM 2 points [-]

A million dollars is a lot more zero-sum than not killing someone - if I give you a million dollars I lose a million dollars. To make the analogy more accurate, you'd need to stipulate that Joe will kill me if I don't kill him.

Also, I don't think it's fair to ignore the fact that for most people, not killing someone is vastly easier to do at non-self-destructive cost than giving away a million dollars. I appreciate that this is a quantitative argument rather than a categorical counterargument, but if we have atheists who base their sense of morality on a vague consequentialism that they can't quite fully articulate, that's still no worse than Robertson's (presumed) divine command theory, and they should be able to make such arguments without being accused of hypocrisy for not also advocating actions that would score much worse under their vague consequentialism.

Comment author: seer 31 March 2015 05:09:36AM 6 points [-]

A million dollars is a lot more zero-sum than not killing someone - if I give you a million dollars I lose a million dollars. To make the analogy more accurate, you'd need to stipulate that Joe will kill me if I don't kill him.

No, just that you'll get some benefit from killing him, e.g., you get to have sex with his wife.

Comment author: gjm 30 March 2015 09:33:36AM 3 points [-]

Does anything need to?

I guess you're worried that if the same argument works in both cases then you might end up obliged to give Joe $1M. But those reasons why you should give Joe the money have exactly parallel reasons why you should keep it, and to zeroth order they all cancel out, so no such obligation.

If you look with a bit more detail, then the reasons might be stronger one way than the other; for instance, if you are quite rich and Joe is quite poor, he might benefit more from the money than you would. We don't generally have norms saying you should give him the money in this case for all sorts of good reasons, but instead we have taxation (compulsory) and charity (optional) which end up having an effect a bit like saying that rich people should give some of their money to much poorer people.

In typical cases, (1) if you give Joe a $1M then your loss will be bigger than Joe's gain, so even aside from other considerations you probably shouldn't, and (2) if you kill Joe then Joe's loss will be bigger than your gain, so even aside from other considerations you probably shouldn't. So the simple-minded "do whatever makes people happiest" principle (a.k.a. total utilitarianism, but you don't have to be a total utilitarian for this to be a reason, as opposed to the only possible reason, for doing something) gives the "right" answers in most cases.

Comment author: seer 31 March 2015 05:07:57AM 5 points [-]

I guess you're worried that if the same argument works in both cases then you might end up obliged to give Joe $1M.

No, I'm claiming neither Kindly nor you actually believe the argument you've given.

So the simple-minded "do whatever makes people happiest" principle (a.k.a. total utilitarianism, but you don't have to be a total utilitarian for this to be a reason, as opposed to the only possible reason, for doing something) gives the "right" answers in most cases.

Except, you're not doing that, i.e., you're not giving all your income to charity. So since you're willing to ignore parts of your ethics when it's inconvenient, why not also ignore the parts about not killing Joe when it would be convenient were Joe to die?
