Comment author: [deleted] 13 April 2015 01:59:25PM 1 point [-]

I think the issue is not so much unconsciously exploiting it as that the amount of pain felt depends on the absence or presence of "training". More here: http://lesswrong.com/lw/59i/offense_versus_harm_minimization/c8u7

Comment author: Kindly 13 April 2015 03:17:17PM -1 points [-]

Desensitization training is great if it (a) works and (b) is less bad than the problem it's meant to solve.

(I'm now imagining Alice and Carol's conversation: "So, alright, I'll turn my music down this time, but there's this great program I can point you to that teaches you to be okay with loud noise. It really works, I swear! Um, I think if you did that, we'd both be happier.")

Treating thin-skinned people (in all senses of the word) as though they were already thick-skinned is not the same, I think. It fails criterion (a) horribly, and does not satisfy (b) by definition: it is the problem desensitization training ought to solve.

In response to On immortality
Comment author: ChaosMote 10 April 2015 10:53:33PM *  8 points [-]

Mathematician here. I wanted to agree with @pianoforte611 - just because you have infinite time doesn't mean that every event will repeat over and over.

For those interested in some reading, the general question is basically the question of Transience in Markov Chains; I also have some examples. :)

Let us say that we have a particle moving along a line. In each unit of time, it moves a unit of distance either left or right, with probability 1/10 of the former and 9/10 of the latter. How often can we expect the particle to have returned to its starting point? Well, to return to the origin, we must have moved left and right an equal number of times. At odd times, this is impossible; at time 2n, the probability of this is C(2n, n) (1/10)^n (9/10)^n (this is not difficult to derive, and a simple explanation is given here). Summing this over all n, we get that the expected number of returns is only 1/4 - in other words, we have no guarantee of returning even once, much less an infinite number of times!
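The sum above can be checked numerically; here is a short sketch (in Python, which the original comment does not use). Rather than evaluating large binomial coefficients directly, it updates each term by the ratio of consecutive central binomial coefficients:

```python
# Expected number of returns to the origin for the biased walk:
# sum over n >= 1 of C(2n, n) * (1/10)^n * (9/10)^n.
p, q = 0.1, 0.9

term = 2 * p * q  # n = 1 term: C(2, 1) * (p*q)^1
total = 0.0
for n in range(1, 200):
    total += term
    # C(2n+2, n+1) = C(2n, n) * (2n+1)(2n+2) / (n+1)^2
    term *= (2 * n + 1) * (2 * n + 2) / ((n + 1) ** 2) * p * q

print(total)  # ≈ 0.25, matching the closed form 1/sqrt(1 - 4pq) - 1
```

The terms shrink like (4pq)^n = 0.36^n, so 200 terms is far more than enough for the sum to converge.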

If this example strikes you as somewhat asymmetric, worry not - if the point was moving in three dimensions instead of one (so it could move up, down, forward, or back as well as left or right), then a weighting of 1/6 on each direction means that you won't return to the starting point infinitely often. If you don't like having a fixed origin, use two particles, and have them move independently in 3 dimensions. They will meet after time zero with less-than-unit probability (actually, the same probability as in the previous problem, since the problems are equivalent after you apply a transformation).
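The 3D claim is Pólya's theorem, and it can be illustrated with a quick Monte Carlo sketch. The step cap and trial count below are arbitrary choices of mine, not anything from the comment:

```python
import random

random.seed(0)  # fixed seed so the estimate is reproducible
STEPS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def returns_to_origin(n_steps):
    """Run one simple random walk in 3D; report whether it revisits the origin."""
    x = y = z = 0
    for _ in range(n_steps):
        dx, dy, dz = random.choice(STEPS)
        x, y, z = x + dx, y + dy, z + dz
        if x == y == z == 0:
            return True
    return False

trials = 2000
frac = sum(returns_to_origin(1000) for _ in range(trials)) / trials
print(frac)  # hovers around 0.34, close to the exact Pólya return probability -- well below 1
```

Capping walks at 1000 steps slightly undercounts returns, but the estimate still lands clearly below 1, which is the point: the 3D walk is transient.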

I hope this helps!

In response to comment by ChaosMote on On immortality
Comment author: Kindly 10 April 2015 11:53:52PM 4 points [-]

What if we assume a finite universe instead? Contrary to what the post we're discussing might suggest, this actually makes recurrence more reasonable. To show that every state of a finite universe recurs infinitely often, we only need to know one thing: that every state of the universe can be eventually reached from every other state.

Is this plausible? I'm not sure. The first objection that comes to mind is entropy: if entropy always increases, then we can never get back to where we started. But I seem to recall a claim that entropy is a statistical law: it's not that it cannot decrease, but that it is extremely unlikely to do so. Extremely low probabilities do not frighten us here: if the universe is finite, then all such probabilities can be lower-bounded by some extremely tiny constant, which will eventually be defeated by infinite time.

But if the universe is infinite, this does not work: not even if the universe is merely potentially infinite, by which I mean that it can grow to an arbitrarily large finite size. This is already enough for the Markov chain in question to have infinitely many states, and my intuition tells me that in such a case it is almost certainly transient.
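The finite case above can be made concrete with a toy chain (my own construction, not anything from the comment): one state is reachable only through a low-probability transition, yet because every state can eventually reach every other, a long enough run visits all of them repeatedly.

```python
import random

random.seed(1)
# Transition table: from each state, a list of (probability, next_state).
# State 2 is only entered via a rare transition out of state 0.
CHAIN = {
    0: [(0.999, 1), (0.001, 2)],
    1: [(1.0, 0)],
    2: [(1.0, 0)],
}

def step(state):
    r = random.random()
    for prob, nxt in CHAIN[state]:
        if r < prob:
            return nxt
        r -= prob
    return CHAIN[state][-1][1]  # guard against floating-point leftovers

visits = {s: 0 for s in CHAIN}
state = 0
for _ in range(100_000):
    state = step(state)
    visits[state] += 1

print(visits)  # even the "rare" state 2 gets hit dozens of times
```

Shrink the rare probability and you need proportionally more steps, but never infinitely many in expectation - that is the "tiny constant eventually defeated by infinite time" point.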

Comment author: Kindly 10 April 2015 04:06:12PM 3 points [-]

To Bob, I would point out that:

  1. Contrary to C, it is easy to prove that you have an ear or mental condition that makes you sensitive to noise; a note from a doctor or something suffices.

  2. Contrary to D, in case such a condition exists, "toughening up and growing a thicker skin" is not actually a possible response. In some cases, it appears that loud noises make the condition worse. Even when this is not the case, random exposure to noises at the whim of the environment doesn't help.

I realize that you are appealing to a metaphor, but I think that these points often apply to the unmetaphored things as well.

Comment author: [deleted] 06 April 2015 03:53:16PM 1 point [-]

Regarding criticism of my writing: inspired by Karl Popper, I seek out criticism that will help me be, well, less wrong.

Regarding argument by analogy: I do a bit, but I've seen worse. It has its place. I don't care for Harry Potter in the original, film, or Yudkowsky versions. The latter is perhaps slightly longer, yet slightly more up-voted, than my post.

Regarding my style: many philosophies have both a function and a form. In writing, some philosophies have a message to convey and a style that it is often conveyed in. There is a style to objectivist essays, Maoist essays, Buddhist essays, and often there is a style to less wrong essays. I wrote my egoist essay in the egoist style, in honor of those egoists who led to me including Max Stirner, Dora Marsden, Apio Ludd and especially Malfew Seklew. Egoism - it's not for everybody.

Regarding April Fools' Day: please believe whatever invokes the strongest response in you, be it positive or negative.

Regarding solicitation of payment for goods and services: I checked, and there is no policy against it. I also waited, over a year, to see if in the lack of a policy there was a trend or tradition regarding links to commercial services. The trend is that such posts exist, and whatever criticism they get is never based on including a commercial link. This is true whether the link is for the author's own good or service, or for a different good or service. A few examples: 1 2 3. The difference is that the tolerated and celebrated commercial links - the ones that do not lower the general level of discourse, that are not downvoted - are or claim to be altruistic. Mine is selfish, fitting even for a failed egoist such as myself. I don't say that maybe if you pay me to tutor your kids then maybe they will learn something. I don't say that maybe if you donate to my think tank then maybe I'll save the world from hostile AI. I say, guaranteed and without controversy, that two things will happen to those who buy my book. First, they will get a book. Second, I will earn several dozen pennies in royalties. Altruistic maybe is okay, egoistic promises no? I like the game of Less Wrong and am happy to play by the rules. If commercial links are allowed, or limited, or banned, I can go with it or go away. But it'll be more fun for me if I get to play by the same rules as everyone else.

  • Trevor Blake is the author of a book. There is no such thing as a so-called "search engine" so don't even try to look for it.
In response to comment by [deleted] on If You Like This Orange...
Comment author: Kindly 06 April 2015 09:37:11PM 4 points [-]

Regarding my style: many philosophies have both a function and a form. In writing, some philosophies have a message to convey and a style that it is often conveyed in. There is a style to objectivist essays, Maoist essays, Buddhist essays, and often there is a style to less wrong essays. I wrote my egoist essay in the egoist style, in honor of those egoists who led to me including Max Stirner, Dora Marsden, Apio Ludd and especially Malfew Seklew. Egoism - it's not for everybody.

The things that make your writing style unapproachable are not features of "the egoist style", at least according to what my superficial inspection of "the egoist style" discovered. What makes your writing style unapproachable is the lack of indication you give of what you're trying to prove.

I decided to investigate the first name on your list, Max Stirner, who has the admirable character trait of being long dead and therefore available to read on Google Books for free. I skimmed the bit of The Ego and His Own which was under the heading "All Things are Nothing to Me". Here is what I found.

Stirner begins by saying "People want me to care about everything--God, country, and so on--except myself. Is this reasonable? Let us look at what God and country have to say about it." He then fulfills his promise by explaining, in the next few paragraphs, how those causes are selfish; addressing, in turn, "God", "country", and "and so on". He ends by giving his own answer to what he thinks he should care about.

You, on the other hand, begin with oranges. I follow along with this game for a few paragraphs, and eventually discover that you did not mean oranges when you said oranges. I consider re-reading those paragraphs to see what you did mean, but get bored and skip to the end, where you tell me that it's okay to like things I like. Well, okay. This doesn't seem like a controversial conclusion; if you were arguing for this all along, then maybe I was right to skip to the end. Maybe I skipped the bit where you explained how some people disagree, so I can believe that your conclusion is interesting. Oh well.

Stirner signposts. Stirner makes promises about what he will talk about and then keeps them. If I had been interested in engaging with the substance of Stirner, rather than his style, I would have read carefully the paragraphs where he explains why God's cause is a selfish cause. Not having done that, I can still point to those paragraphs, because Stirner told me where he would explain this. I can summarize Stirner's argument, not because I am good at summarizing, but because Stirner gave me several summaries.

If you don't tell me where you are and where you're going, I have no means or inclination to follow along with you.

Comment author: Quill_McGee 06 April 2015 06:56:06PM 0 points [-]

I was thinking of the "feeling bad and reconsider" meaning. That is, you don't want regret to occur, so if you are systematically regretting your actions it might be time to try something new. Now, perhaps you were acting optimally already and when you changed you got even /more/ regret, but in that case you just switch back.

Comment author: Kindly 06 April 2015 07:12:21PM 1 point [-]

That's true, but I think I agree with TheOtherDave that the things that should make you start reconsidering your strategy are not bad outcomes but surprising outcomes.

In many cases, of course, bad outcomes should be surprising. But not always: sometimes you choose options you expect to lose, because the payoff is sufficiently high. Plus, of course, you should reconsider your strategy when it succeeds for reasons you did not expect: if I make a bad move in chess, and my opponent does not notice, I still need to work on not making such a move again.

I also worry that relying on regret to change your strategy is vulnerable to loss aversion and similar bugs in human reasoning. Betting and losing $100 feels far worse than betting and winning $100 feels good, to the extent that we can compare them. If you let your regret of the outcome decide your strategy, then you end up teaching yourself to use this buggy feeling when you make decisions.

Comment author: Furslid 06 April 2015 04:55:24PM 2 points [-]

I actually like that line. There are a lot of people and organizations that are portrayed as rational and evil. Walmart sacrificing all soft values to maximize profit and the robot overlords systematically destroying or enslaving humanity are also views of rationality. They can be used as objections as much as Spock can. This quick joke shows that problems like this are considered, even if they aren't dealt with in depth here.

Comment author: Kindly 06 April 2015 06:03:26PM 6 points [-]

Part of it might just be the order. Compare that paragraph to the following alternative:

The rationality of Rationality: AI to Zombies isn't about using cold logic to choose what to care about. Reasoning well has little to do with what you're reasoning towards. If your goal is to annihilate as many puppies as possible, then this kind of rationality will help you annihilate more puppies. But if your goal is to enjoy life to the fullest and love without restraint, then better reasoning (whether hot or cold, rushed or relaxed) will also help you do so.

Comment author: TheOtherDave 06 April 2015 02:11:35AM 2 points [-]

It not only results in unavoidable regret, it sometimes results in regretting the correct choice.

Given a choice between "$5000 if I roll a 6, $0 if I roll between 1 and 5" and "$5000 if I roll between 1 and 5, $0 if I roll a 6," the correct choice is the latter. If I regret my choice simply because the die came up 6, I run the risk of not noticing that my conception of "the right thing" was correct, and making the wrong choice next time around.
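For concreteness, here is the arithmetic behind "the correct choice is the latter", assuming a fair six-sided die (the comment does not state the die is fair, but clearly intends it):

```python
# Gamble A: $5000 only if the die shows 6; Gamble B: $5000 unless it shows 6.
ev_a = 5000 * (1 / 6)  # ≈ $833.33
ev_b = 5000 * (5 / 6)  # ≈ $4166.67
print(ev_a, ev_b)  # B is worth five times as much in expectation
```

So regretting B after rolling a 6 means regretting a choice that was five times better in expectation.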

Comment author: Kindly 06 April 2015 04:36:05AM 1 point [-]

I'm not sure that regretting correct choices is a terrible downside, depending on how you think of regret and its effects.

If regret is just "feeling bad", then you should just not feel bad for no reason. So don't regret anything. Yeah.

If regret is "feeling bad as negative reinforcement", then regretting things that are mistakes in hindsight (as opposed to correct choices that turned out bad) teaches you not to make such mistakes. Regretting all choices that led to bad outcomes hopefully will also teach this, if you correctly identify mistakes in hindsight, but this is a noisier (and slower) strategy.

If regret is "feeling bad, which makes you reconsider your strategy", then you should regret everything that leads to a bad outcome, whether or not you think you made a mistake, because that is the only kind of strategy that can lead you to identify new kinds of mistakes you might be making.

Comment author: [deleted] 31 March 2015 12:49:29PM *  5 points [-]

LW and related blogs are basically spoiling fantasy fiction for me. DAE have an experience like this? How do you overcome it?

My formerly existing but weakly skeptical atheism and generic anti-supernaturalism got really strengthened here. I bought into the idea that the supernatural means the proposition that some mental things are not reducible to nonmental things, and from that it is only a small jump to say that mental things are entirely in the map, not in the terrain: it is a useful shorthand model to think of some things as mental, but they are never irreducibly so in the terrain. So irreducibly mental things, i.e. supernatural things, are always, in principle, map-terrain mistakes. So we can on the map level think of medicine having healing properties, because the effect it has on a certain condition is what we put into a mental category of making us "healthier", but the medicine does not actually heal bodies, it just changes bodies. From this viewpoint, a Potion of Healing is a map-terrain mistake, as it suggests a substance could have a real healing property. But healing is a mental property, a property of models, maps, not real things. You could say the same about a magic sword that has a bloodthirsty evil spirit in it. The real world has only change; certain things can effect certain changes, but it is entirely a mental model that we call that change helping, harming, healing, good, evil, cruel, nice, killing, purifying etc.

Sh1t, now it seems to me the single most important step from the medieval-alchemical world to the world of science was understanding the map-terrain problem! That a Philosopher's Stone (which does not simply turn lead to gold but improves everything) cannot exist in principle - not merely that it empirically doesn't - because the idea of improvement itself is a mental category that does not exist in the terrain!

And now my beloved Dragonlance novels feel utterly stupid to me.

(Note: I haven't read HPMOR beyond the first few chapters, the conflation of the two worlds, rational and fantasy, made me feel uncomfortable and dizzy somehow.)

For fun, what is the worst fantasy or other fictional offender of mistaking mental phenomena for something essentially real? My proposal: the idea that goodness or evil are substances that can be formed into magic objects, such as a sword made of pure evil. Not sure where I've read that, but I'm pretty sure some novels proposed something like that.

If you can recommend any further reading, even if only tangentially relevant to what I wrote here, I will be grateful. I am not the first one to notice that the all-improving Philosopher's Stone could not exist in principle, because improvement is a mental category and not real, right?

In response to comment by [deleted] on Open thread, Apr. 01 - Apr. 05, 2015
Comment author: Kindly 03 April 2015 04:09:51PM *  0 points [-]

My proposal: the idea that goodness or evil are substances that can be formed into magic objects, such as a sword made of pure evil.

Of course, some novels also subvert this delightfully. Patricia Wrede's The Seven Towers, for instance, is all about exactly what goes wrong when you try to make a magical object out of pure good.

(Edit: that is, Wrede does not literally spend the whole book talking about this problem. It is merely mentioned as backstory. But still.)

Comment author: seer 30 March 2015 07:35:06AM 4 points [-]

Replace "killing Joe" with, say, "not giving Joe a million dollars" in that argument; what changes?

Comment author: Kindly 31 March 2015 08:46:51PM 1 point [-]

What changes is that I would like to have a million dollars as much as Joe would. Similarly, if I had to trade between Joe's desire to live and my own, the latter would win.

In another comment you claim that I do not believe my own argument. This is false. I know this because if we suppose that Joe would like to be killed, and Joe's friends would not be sad if he died, then I am okay with Joe's death. So there is no other hidden factor that moves me.

I'm not sure what the observation that I do not give all of my money away to charity has to do with anything.

Comment author: seer 30 March 2015 02:42:31AM 3 points [-]

Most atheists do think that there is something wrong with rape and murder.

The problem is they have a hard time saying what.

Comment author: Kindly 30 March 2015 04:35:38AM 1 point [-]

I don't think that's true in any important way.

I might say: "Killing Joe is bad because Joe would like not to be killed, and enjoys continuing to live. Also, Joe's friends would be sad if Joe died." This is not a sophisticated argument. If an atheist would have a hard time making it, it's only because one feels awkward making such an unsophisticated argument in a debate about morality.
