All of Multipartite's Comments + Replies

Running through this to check that my wetware handles it consistently.

Paying -100 if asked:

When the coin is flipped, one's probability branch splits into a 0.5 of oneself in the 'simulation' branch and 0.5 in the 'real' branch. For the 0.5 in the real branch, there is upon waking a subjective 50% probability of its being either of the two possible days, both of which one will be woken on. So, 0.5 of the time one wakes in the simulation, 0.25 in real day 1, and 0.25 in real day 2.

0.5 x (260) + 0.25 x (-100) + 0.25 x (-100) = 80. However, this is the expected cash-balance change... (read more)
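For concreteness, a minimal sketch of the two ways of weighting that arithmetic (plain Python; the 260 and -100 payoffs are the thread's, and the reading that -100 is paid at each real-branch awakening is my assumption):

    # Weighting by awakening (as above) versus by coin flip.
    p_sim, p_real1, p_real2 = 0.5, 0.25, 0.25
    per_awakening = p_sim * 260 + p_real1 * (-100) + p_real2 * (-100)  # = 80
    per_flip = 0.5 * 260 + 0.5 * (-100 - 100)                          # = 30, total change per coin flip
    print(per_awakening, per_flip)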

(For thoroughness, noting that the other approach was also wondered about a little earlier. Surface action is an alternative to look at if projectile-launching would definitely be ineffective, but if the projectile approach would in fact be better then there'd be no reason not to focus on it instead.)

A fair point. On the subject of pulling vast quantities of energy from nowhere, does any one country currently possess the knowledge and materials to build a bomb that, detonated on the surface, could {split the Earth like a grape}/{smash the Earth like an egg}/{dramatic verb the Earth like a metaphorical noun}?

And yes, not something to try in practice with an inhabited location. Perhaps a computer model, at most... actually, there's a thought regarding morbid fascination. I wonder what would be necessary to provide a sufficiently-realistic (uninhabite... (read more)

0MartinB
Doubtful. Breaking the earth up is hard. The biggest explosion ever made is this one: http://en.wikipedia.org/wiki/Czar_bomb
1gwern
Can we? Probably not, there don't seem to be enough fissiles available: http://www.coarsegra.in/?p=95 There's also scale issues at play - as your bomb gets larger and larger, relatively more of its energy escapes into space and isn't directed into the ground.

Not directly related, but an easier question: Do we currently have the technology to launch projectiles out of Earth's atmosphere into a path such that, in a year's time or so, the planet smashes into them from the other direction and sustains significant damage?

(Ignoring questions of targeting specific points, just the question of whether it's possible to arrange that without the projectiles falling into the sun or just following us eternally without being struck or getting caught in our gravity well too soon... hmm, if we could somehow put it into an o... (read more)

1MichaelAnissimov
You'd probably have to use a more powerful kind of rocket than any that currently exists, like a nuclear rocket, to launch enough mass into space for it to cause "significant damage" upon reentry.
4MartinB
Any kinetic energy an object has, it has to get first. If you compare the size of satellites with their respective rockets, it looks difficult to make an object of any reasonable mass reach any significant speed. You can trick a bit with swing-by maneuvers, but as far as I understand no man-made object makes any more than a little sound in the atmosphere while entering. You could however poison the planet with a nice substance. On the other hand it might be possible to use a man-made satellite to deflect a bigger object so that it crashes into Earth. But please do not try this on your home.


In practice, this seems to break down at a specific point: this can be outlined, for instance, with the hypothetical stipulation "...and possesses the technology or similar power to cross universe boundaries and appear visible before me in my room, and will do so in exactly ten seconds.".

As with the fallacy of a certain ontological argument, the imagination/definition of something does not make it exist, and even if a certain concept contains no apparent inherent logical impossibilities that still does not mean that there could/would exi... (read more)

0Anubhav
Congratulations, you have discovered that most philosophy isn't worth the paper it's written on.

(Absent(?) thought after reading: one can imagine someone, through a brain-scanner or similar, controlling a robot remotely. One can utter, through the robot, "I'm not actually here.", where 'here' is where one is doing the uttering through the robot, and 'I' (specifically 'where I am') is the location of one's brain. The distinction between the claim 'I'm not actually here' and 'I'm not actually where I am' is notable. Ahh, the usefulness of technology. For belated communication, the part about intention is indeed significant, as with whether a diary is written in the present tense (time of writing) or in the past tense ('by the time you read this[ I will have]'...).)

Enjoyed the approach.

To ask the main question that the first link brings to mind: What prevents a person from paying both a life insurance company and a longevity insurance company (possibly the same company) relatively-small amounts of money each, in exchange for either a relatively-large payout from the life insurance if the person dies early or a relatively-large payout from the longevity insurance if the person dies late?

To extend, what prevents a hypothetically large number of people from on average creating this effect (even if each is disallowed from having both rather than just one or the other), thus creating a guaranteed total loss overall on the part of an insurance company?

0asr
I assume that the insurance company won't sell a policy that is unfavorable to them in expectation. The way insurance companies make money is to set their rates so that they win on average. If you buy both life insurance and longevity insurance, you'll find that the payments you put in exceed the value of the payout, at least in expectation. Put another way: you're dutch-booking yourself, not them. Or have I missed a nuance here?
1gwern
Well, nothing, I would imagine; but keep in mind you are locking away a lot of money for a very long period of time, and the payouts are constantly adjusted with age - so unless the companies are outright screwing up and allowing an arbitrage opportunity, you reduce your expected returns either through not getting as much 'relatively' as you seem to expect or by opportunity cost (the companies returning your money, but having profited off the 'float' - which is how insurance companies have long been able to pay out 'more' than they should, because they made their profit off investing your money and not taking a percentage of your premiums).
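A toy illustration of asr's point, with made-up numbers (a hypothetical 10% loading over the actuarially fair premium; nothing here is from the linked article):

    # Hypothetical numbers for illustration only.
    p_die_early = 0.5
    payout = 100_000
    loading = 1.10  # insurer charges 10% above the actuarially fair premium

    life_premium = loading * p_die_early * payout             # pays out only if one dies early
    longevity_premium = loading * (1 - p_die_early) * payout  # pays out only if one dies late

    # Buying both guarantees exactly one payout of 100,000, but the combined
    # premiums come to 110,000: a certain loss of 10,000 for the buyer.
    print(life_premium + longevity_premium - payout)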

To answer the earlier question, an alteration which halved the probability of failure would indeed change an exactly-0% probability of success into a 50% probability of success.

If one is choosing between lower increases for higher values, unchanged increases for higher values, and greater increases for higher values, then the first has the advantage of not quickly giving numbers over 100%. I note though that the opposite effect (such as hexing a foe?) would require halving the probability of success instead of doubling the probability of failure.

The eff... (read more)

2DanielLC
The simplest way is to use odds ratios instead of log probability. 5% is 1:19. Multiply that by 2:1 and you get 2:19 which corresponds to 9.52%. If it's close to 100%, you get close to half the probability of failure. If it's close to 0%, you get close to double the probability of success. This can be done with dice by using a virtual d21. You can do that by rolling a higher-numbered die and re-rolling if you pass 21. Since the next die up is d100, you can combine two dice to get d24 or d30 the same way you combine two d10s to get a d100. Alternately, use a computer or a graphing calculator instead of a die, and you can have it give whatever probabilities you want.
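A minimal sketch of that odds-ratio bookkeeping in plain Python (the function name and example values are only illustrative):

    def apply_odds_modifier(p, ratio):
        """Multiply the odds of success by `ratio` and return the new probability."""
        odds = p / (1 - p)       # e.g. 5% success -> odds of 1:19
        new_odds = odds * ratio  # a 2:1 bonus doubles the odds
        return new_odds / (1 + new_odds)

    print(apply_odds_modifier(0.05, 2))    # ~0.0952, i.e. 2:19
    print(apply_odds_modifier(0.95, 2))    # ~0.9744, close to halving the failure chance
    print(apply_odds_modifier(0.95, 0.5))  # ~0.9048, the same machinery applied as a penalty/hex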

For what it's worth, I'm reminded of systems which handle modifiers (multiplicatively) according to the chance of failure:

[quote]

For example, the first 20 INT increases magic accuracy from 80% to

(80% + (100% - 80%) * .01) = 80.2%

not to 81%. Each 20 INT (and 10 WIS) adds 1% of the remaining distance between your current magic accuracy and 100%. It becomes increasingly harder (technically impossible) to reach 100% in any of these derived stats through primary attributes alone, but it can be done with the use of certain items.

[/quote]

A clearer exampl... (read more)
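A quick sketch of the quoted rule (illustrative Python; the function name and step counts are mine), showing the asymptotic approach to 100% that the quote describes:

    def accuracy_after(base, steps, fraction=0.01):
        """Each step closes `fraction` of the remaining gap to 100%."""
        acc = base
        for _ in range(steps):
            acc += (1.0 - acc) * fraction
        return acc

    print(accuracy_after(0.80, 1))    # 0.802, matching the quoted example
    print(accuracy_after(0.80, 500))  # ~0.9987: ever closer to, but never reaching, 1.0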

3DanielLC
The problem with this is it only makes sense when you have a high chance of success. Suppose I attempted to blow up the Earth. Normally, I'd have an approximately 0% chance of success. Would that bonus increase it to 50%?

The Turing machine doing the simulating does not experience pain, but the human being being simulated does.

Similarly, the waterfall argument found in the linked paper seems as though it could as-easily be used to argue that none of the humans in the solar system have intelligence unless there's an external observer to impose meaning on the neural patterns.

A lone mathematical equation is meaningless without a mind able to read it and understand what its squiggles can represent, but functioning neural patterns which respond to available stimuli causally(/thr... (read more)

(Assuming that it stays on the line of 'what is possible', in any case a higher Y than otherwise, but finding it then according to the constant X: 1 - ((19/31) * (1/19)) = 30/31, yes...)

I confess I do not understand the significance of the terms mixed outcome and weighted sum in this context, I do not see how the numbers 11/31 and 20/31 have been obtained, and I do not presently see how the same effect can apply in the second situation in which the relative positions of the symmetric point and its (Pareto?) lines have not been shifted, but I now see how in ... (read more)

Rather than X or Y succeeding at gaming it by lying, however, it seems that a disinterested objective procedure that selects by Pareto optimality and symmetry would then output a (0.6, 0.6) outcome in both cases, causing a -0.35 utility loss for the liar in the first case and a -0.1 utility loss for the liar in the second.

Is there a direct reason that such an established procedure would be influenced by a perceived (0.95, 0.4) option to not choose an X=Y Pareto outcome? (If this is confirmed, then indeed my current position is mistaken.)

3Stuart_Armstrong
(0.6, 0.6) is not Pareto. The "equal Pareto outcome" is the point (19/31,19/31) which is about (0.62,0.62). This is a mixed outcome, the weighted sum of (0,1) and (0.95,0.4) with weights 11/31 and 20/31. In reality, for y's genuine utility, this would be 11/31(0,1) + 20/31(0.95,0.95)=(19/31,30/31), giving y a utility of about 0.97, greater than the 0.95 he would have got otherwise.
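A quick arithmetic check of those weights, in exact fractions (the variable names are only for illustration):

    from fractions import Fraction as F

    w1, w2 = F(11, 31), F(20, 31)
    stated  = (w1 * 0 + w2 * F(19, 20), w1 * 1 + w2 * F(2, 5))    # mix of (0,1) and (0.95,0.4)
    genuine = (w1 * 0 + w2 * F(19, 20), w1 * 1 + w2 * F(19, 20))  # mix of (0,1) and (0.95,0.95)
    print(stated)   # (19/31, 19/31): the symmetric "equal Pareto outcome", about (0.62, 0.62)
    print(genuine)  # (19/31, 30/31): y's genuine utility is about 0.97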

I may be missing something: for Figure 5, what motivation does Y have to go along with perceived choice (0.95, 0.4), given that in this situation Y does not possess the information possessed (and true) in the previous situation that '(0.95, 0.4)' is actually (0.95, 0.95)?

In Figure 2, (0.6, 0.6) appears symmetrical and Pareto optimal to X. In Figure 5, (0.6, 0.6) appears symmetrical and Pareto optimal to Y. In Figure 2, X has something to gain by choosing/{allowing the choice of} (0.95, 0.4) over (0.6, 0.6) and Y has something to gain by choosing/{allowi... (read more)

0Stuart_Armstrong
The point of the proof is that if there is an established procedure that takes as input people's stated utilities about certain choices, and outputs a Pareto outcome, then it must be possible to game it by lying. The motivations of the players aren't taken into account once their preferences are stated.
2HonoreDB
As Stuart_Armstrong explains to me on a different thread, the decision process isn't necessarily picking one of the discrete outcomes, but can pick a probabilistic mixture of outcomes. (.6,.6) doesn't appear Pareto-optimal because it's dominated by, e.g., selecting (.95, .4) with probability p=.6/.95 and (0,1) with probability 1-p.
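A quick check of that domination claim (plain arithmetic; the variable names are mine):

    p = 0.6 / 0.95
    mix = (0.95 * p + 0.0 * (1 - p), 0.4 * p + 1.0 * (1 - p))
    print(mix)  # (0.6, ~0.621): the same x as (0.6, 0.6) but strictly better for y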

A very interesting perspective: Thank you!

' I am still mystified by the second koan.': The novice associates {clothing types which past cults have used} with cults, and fears that his group's use of these clothing types suggests that the group may be cultish.

In practice (though the clothing may have an unrelated advantage), the clothing one wears has no effect on the validity of the logical arguments used in reasoning/debate.

The novice fears a perceived connection between the clothing and cultishness (where cultishness is taken to be a state of faith over rationality, or in any case irrationality... (read more)

In practice (though the clothing may have an unrelated advantage), the clothing one wears has no effect on the validity of the logical arguments used in reasoning/debate.

The purpose of the clothing is to make people aware of the dangers of cultishness, even though wearing identical clothing, all else equal, encourages cultishness. All else is not equal: it is a worthwhile cost to bring the issue to the fore and force people to compensate by thinking non-cultishly (not counter-cultishly).

A novice rationalist approached the master Ougi and said, "

... (read more)

Depending on the cost, it at least seems to be worth knowing about. If one doesn't have it then one can be assured on that point, whereas if one does have it then one at least has appropriate grounds on which to second-guess oneself.

(I have been horrified in the past by tales of {people who may or may not have inherited a dominant gene for definite early disease-related death} who all refused to be tested, thus dooming themselves to lives of fear and uncertainty. If they were going to have entirely healthy lives then they would have lived in fear and u... (read more)

'I haven't seen a post on LW about the grue paradox, and this surprised me since I had figured that if any arguments would be raised against Bayesian LW doctrine, it would be the grue problem.':

If of relevance, note http://lesswrong.com/lw/q8/many_worlds_one_best_guess/ .

'The second AI helped you more, but it constrained your destiny less.': A very interesting sentence.

On other parts, I note that the commitment to a range of possible actions can be seen as larger-scale than commitment to a single action, even before the choice of which one to take is made.

A particular situation that comes to mind, though:

Person X does not know of person Y, but person Y knows of person X. Y has an emotional (or other) stake in a tiebreaking vote that X will make; Y cannot be present on the day to observe the vote, but sets up a simple machine to detect what ... (read more)

Thought 1: If hypothetically one's family was going to die in an accident or otherwise (for valid causal wish-unrelated reasons), the added mental/emotional effect on oneself would be something to avoid in the first place. Given that one is not infallible, one can never assert absolute knowledge of non-causality (direct or indirect), and that near-infinitesimal consideration could haunt one. Compare this possibility to the ease, normally, of taking other routes and thus avoiding that risk entirely.

...other thoughts are largely on the matter of integrity... ... (read more)

CEV document: I have at this point somewhat looked at it, but indeed I should ideally find time to read through it and think through it more thoroughly. I am aware that the sorts of questions I think of have very likely already been thought of by those who have spent many more hours thinking about the subject than I have, and am grateful that the time has been taken to answer the specific thoughts that come to mind as initial reactions.


Reaction to the difference-showing example (simplified by the assumption that a sapient smarter-me is assumed to not e... (read more)

Diamond: Ahh. I note that looking at the equivalent diamond section, 'advise Fred to ask for box B instead' (hopefully including the explanation of one's knowledge of the presence of the desired diamond) is a notably potentially-helpful action, compared to the other listed options which can be variably undesirable.


Varying priorities: That I change over time is an accepted aspect of existence. There is uncertainty, granted; on the one hand I don't want to make decisions that a later self would be unable to reverse and might disapprove of, but on the o... (read more)

4[anonymous]
Both. I meant, in order for the AI not to (very probably) paperclip us. Our (or someone else’s) volitions are extrapolated in the initial dynamic. The output of this CEV may recommend that we ourselves are actually transformed in this or that way. However, extrapolating volition does not imply that the output is not for our own benefit! Speaking in a very loose sense for the sake of clarity: “If you were smarter, looking at the real world from the outside what actions would you want taking in the real world?” is the essential question – and the real world is one in which the humans that exist are not themselves coherently-extrapolated beings. The question is not “If a smarter you existed in the real world, what actions would it want taking in the real world?” See the difference? Hopefully the AI’s simulations of people are not sentient! It may be necessary for the AI to reduce the accuracy of its computations, in order to ensure that this is not the case. Again, Eliezer discusses this in the document on CEV which I would encourage you to read if you are interested in the subject.

Reading other comments, I note my thoughts on the undesirability of extrapolation have largely been addressed elsewhere already.


Current thoughts on giving higher preference to a subset:

Though one would be happy with a world reworked to fit one's personal system of values, others likely would not be. Though selected others would be happy with a world reworked to fit their agreed system of values, others likely would not be. Moreover, assuming changes over time, even if such is held to a certain degree at one point in time, changes based on that may turn... (read more)

Ahh. Thank you! I was then very likely at fault on that point, being familiar with the phrase yet not recognising the acronym.

I unfortunately lack time at the moment; rather than write a badly-thought-out response to the complete structure of reasoning considered, I will for the moment write fully-thought-out thoughts on minor parts thereof that my (?) mind/curiosity has seized on.


'As for “taking over the world by proxy”, again SUAM applies.': this sentence stands out, but glancing upwards and downwards does not immediately reveal what SUAM refers to. Ctrl+F and looking at all appearances of the term SUAM on the page does not reveal what SUAM refers to. The first page of Goo... (read more)

1[anonymous]
My brief recapitulation of Yudkowsky’s diamond example (which you can read in full in his CEV document) probably misled you a little bit. I expect that you would find Yudkowsky’s more thorough exposition of “extrapolating volition” somewhat more persuasive. He also warns about the obvious moral hazard involved in mere humans claiming to have extrapolated someone else’s volition out to significant distances – it would be quite proper for you to be alarmed about that! Taken to the extreme this belief would imply that every time you gain some knowledge, improve your logical abilities or are exposed to new memes, you are changed into a different person. I’m sure you don’t believe that – this is where the concept of “distance” comes into play: extrapolating to short distance (as in the diamond example) allows you to feel that the extrapolated version of yourself is still you, but medium or long distance extrapolation might cause you to see the extrapolated self as alien. It seems to me that whether a given extrapolation of you is still “you” is just a matter of definition. As such it is orthogonal to the question of the choice of CEV as an AI Friendliness proposal. If we accept that an FAI must take as input multiple human value sets in order for it to be safe – I think that Yudkowsky is very persuasive on this point in the sequences – then there has to be a way of getting useful output from those value sets. Since our existing value computations are inconsistent in themselves, let alone with each other the AI has to perform some kind of transformations to cohere a useful signal from this input – this screens off any question of whether we’d be happy to run with our existing values (although I’d certainly choose the extrapolated volition in any case). “Knowing more”, “thinking faster”, “growing up closer together” and so on seem like the optimal transformations for it to perform. Short-distance extrapolations are unlikely to get the job done, therefore medium or long-d
0Multipartite
Reading other comments, I note my thoughts on the undesirability of extrapolation have largely been addressed elsewhere already. ---------------------------------------- Current thoughts on giving higher preference to a subset: Though one would be happy with a world reworked to fit one's personal system of values, others likely would not be. Though selected others would be happy with a world reworked to fit their agreed system of values, others likely would not be. Moreover, assuming changes over time, even if such is held to a certain degree at one point in time, changes based on that may turn out to be regrettable. Given that one's own position (and those of any other subset) are liable to be riddled with flaws, multiplying may dictate that some alternative to the current situation in the world be provided, but it does not necessarily dictate that one must impose one subset's values on the rest of the world to the opposition of that rest of the world. Imposition of peace on those filled with hatred who thickly desire war results in a worsening of those individuals' situation. Imposition of war on those filled with love who strongly desire peace results in a worsening of those individuals' situation. Taking it as given that each subset's ideal outcome differs significantly from that of every other subset in the world, any overall change according to the will of one subset seems liable to yield more opposition and resentment than it does approval and gratitude. Notably, when thinking up a movement worth supporting, such an action is frightening and unstable--people with differing opinions climbing over each other to be the ones who determine the shape of the future for the rest. What, then, is an acceptable approach by which the wills coincide of all these people who are opposed to the wills of other groups being imposed on the unwilling? Perhaps to not remake the world in your own image, or even in the image of people you choose to be fit to remake the world
5Stuart_Armstrong
SUAM = shut up and multiply

I can somewhat sympathise, in that when removing a plaster I prefer to remove it slowly, for a longer bearable pain, than quickly for a brief unbearable pain. However, this can only be extended so far: there is a set (expected) length of continuing bearable pain over which one would choose to eliminate the entire thing with brief unbearable pain, as with tooth disease and (hypothetical) dentistry, or an unpleasant-but-survivable illness and (phobic) vaccination.

'prefer any number of people to experience the former pain, rather than one having to bear the latt... (read more)

Is the distribution necessary (other than as a thought experiment)?

Simplifying to a 0->3 case: If changing (in the entire universe, say) all 0->1, all 1->2, and all 2->3 is judged as worse than changing one person's 0->3 --for the reason that, for an even distribution, the 1s and 2s would stay the same number and the 3s would increase with the 0s decreasing-- then for what hypothetical distribution would it be even worse and for what hypothetical distribution would it be less bad? Is it worse if there are only 0s who all become 1s, or is i... (read more)

('Should it fit in a pocket or backpack?': Robot chassis, please. 'Who is the user?': Hopefully the consciousness itself. O.O)

  • In general, make decisions according to the furtherance of your current set of priorities.
  • Personally, though I enjoy certain persistent-world games for their content and lasting internal advantages, the impression I've gotten from reading others' accounts of World of Warcraft compared to other games is that it takes up a disproportionate amount of time/effort/money compared to other sources of pleasure.

For that game, the sunk-costs fallacy and the training-to-do-random-things-infinitely phenomenon may help in speculating about why so many sink and cont... (read more)

Crocker's Rules: A significantly interesting formalisation that I had not come across before! Thank you!

On the one hand, even if someone doesn't accept responsibility for the operation of their own mind it seems that they nevertheless retain responsibility for the operation of their own mind. On the other hand, from a results-based (utilitarian?) perspective I see the problems that can result from treating an irresponsible entity as though they were responsible.

Unless one judged it as having significant probability that one would shortly be stabbed, have... (read more)

Beware! Crocker's Rules is about being able to receive information as fast as possible, not to transmit it!

From Radical Honesty:

Crocker's Rules didn't give you the right to say anything offensive, but other people could say potentially offensive things to you, and it was your responsibility not to be offended. This was surprisingly hard to explain to people; many people would read the careful explanation and hear, "Crocker's Rules mean you can say offensive things to other people."

From wiki.lw:

In contrast to radical honesty, Crocker's rules

... (read more)
6dlthomas
Because the rest of the world operates without Crocker's Rules, treating someone as if they are is deemed to itself be a part of the message.

Note that these people believing this thing to be true does not in fact make it any likelier to be false. We judge it to be less {more likely to be true} than we would for a generic positing by a generic person, down to the point of no suspicion one way or the other, but this positing is not in fact reversed into a positive impression that something is false.

If one takes two otherwise-identical worlds (unlikely, I grant), one in which a large body of people X posit Y for (patently?) fallacious reasons and one in which that large body of people posit the c... (read more)

Thank you!

If I may ask, is there a location in wiki.lesswrong.com or elsewhere which describes how to use quote-bars (of the type used in your comment for my paragraph) and similar?

4gwern
It's just Markdown. In fact, every time you reply, the 'Help' hyperlink is a mini-tutorial on Markdown.

Indeed. nods

If sacrifice of myself was necessary to (hope to?) save the person mentioned, I hope that I would {be consistent with my current perception of my likely actions} and go through with it, though I do not claim complete certainty of my actions.

If those that would die from the hypothetical disease were the soon-to-die-anyway (very elderly/infirm), I would likely choose to spend my time on more significant areas of research (life extension, more-fatal/-painful diseases).

If all other significant areas had been dealt with or were being adequately dea... (read more)

I note the answer to this seems particularly straightforward if the few dozen who would probably die would also have been young and healthy at the time. Even more convenient if the subject is a volunteer, and/or if the experimenter (possibly with a staff of non-sentient robot record-keepers and essay-compilers, rather than humans?) did them on himself/herself/themself(?).

(I personally have an extremely strong desire to survive eternally, but I understand there are (/have historically been) people who would willingly risk death or even die for certain in o... (read more)

2[anonymous]
Assume the least-convenient possible world. It's not like this one is fair either...

Humanity can likely be assumed to, under those conditions, balloon out to its former proportions (following the pattern of population increase to the point that available resources can no longer support further increase).

One possibility is that this would represent a delay in the current path and not much else, though depending on the length of time needed to rebuild our infrastructure it could make a large difference in efforts to establish humanity as safely redundant (outside Earth, that is).

Another possibility (the Oryx and Crake concept) is that due t... (read more)

3NancyLebovitz
I've wondered whether landfills could be viewed as extremely high-grade ore compared to what's naturally available.
0[anonymous]
Actually, up-to-date modelling suggests that even a "minor" nuclear war between only two combatants, with 50 warheads apiece would be enough to render global agriculture impossible for a year or longer. The concomitant effects on hunter-gatherers are probably similarly devastating. If some portion of humanity does survive that first year, I wouldn't be so very optimistic they're close enough to each other to make rebuilding a minimal viable population easy, let alone that the memories of the recently-destroyed global infrastructure are sufficiently present and relevant to be worth carrying forward to their descendants as anything other than a cautionary tale. What you're looking at is basically a remote chance that some really isolated group in say, the Far North or an island in the Pacific manages to hold it together in a hunter-gatherer kinda way and do so for long enough that their population doesn't collapse. I'm betting you still don't see them expand in any meaningful way for centuries after the fact, and the environmental damage may constrain even that for a lot longer.
1JoshuaZ
The Oryx and Crake idea has been discussed seriously by Nick Bostrom. One thing to keep in mind is that for some metals things will be easier the second time around. The really prominent example is aluminum. It takes a lot of technology and infrastructure to refine aluminum (for most of the 19th century its price rivaled or exceeded that of gold). But, aluminum once it has been purified is really easy to work with. So one would have all sorts of aluminum just left around ready to use. Nuclear war makes that situation slightly worse because a lot of the aluminum will now be in radioactive cities. But overall, you'll still have easily accessible quantities of a light, strong metal that no one in the middle ages had anything like.

I note that while most of the examples seem reasonable, the Dictator instance seems to stand out: by accepting the trumped-up prospector excuse as admissible, the organisation is agreeing to any similarly flimsy excuse that a country could make (e.g. the route not taken in The Sports Fan). The Lazy Student also comes to mind in terms of being an organisation that would accept such an argument, thus others also making it.

(Hm... I wonder if a valid equivalent of the Grieving case would be if the other country had in fact launched an easily-verifiable full... (read more)

Greetings. I apologise for possible oversecretiveness, but for the moment I prefer to remain in relative anonymity; this is a moniker used online and mostly kept from overlapping with my legal identity.

Though in a sense there is consistency of identity, for fairness I should likely note that my use of first-person pronouns may not always be entirely appropriate.

Personal interest in the Singularity can probably be ultimately traced back to the fiction Deus Ex, though I hope it would have reached it eventually even without it as a starting point; my experie... (read more)

0Normal_Anomaly
Welcome to Less Wrong! Nobody minds if you keep your information secret; I keep my legal identity pretty separate from my Normal_Anomaly identity as well, and I'm not alone in this.

(Unlurking and creating an account for use from this point onwards; please go easy on me.)

Something I found curious in the reading of the comments for this article is the perception that Bouzo took away the conclusion that clothing was in fact important for probability.

Airing my initial impression for possible contrast (/as an indication of my uncertainty): When I read the last sentence, I imagined an unwritten 'And in that moment the novice was enlightened', mirroring the structure of certain koans I once glanced through.

My interpretation is/was that those wo... (read more)

3Oscar_Cunningham
The "Recent Comments" section on the right hand side displays the five most recent comments. So even comments on old posts have a chance to catch the eye one who is browsing. Some even subscribe to all the comments using RSS,