(For thoroughness, noting that the other approach was also wondered about a little earlier. Surface action is an alternative to look at if projectile-launching would definitely be ineffective, but if the projectile approach would in fact be better then there'd be no reason not to focus on it instead.)
A fair point. On the subject of pulling vast quantities of energy from nowhere, does any one country currently possess the knowledge and materials to build a bomb that, detonated on the surface, could {split the Earth like a grape}/{smash the Earth like an egg}/{dramatic verb the Earth like a metaphorical noun}?
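(A back-of-envelope check, using standard published figures and a uniform-density model rather than anything authoritative, suggests the answer is a comfortable no:)

    G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    M = 5.972e24     # Earth's mass, kg
    R = 6.371e6      # Earth's radius, m
    binding_energy = 3 * G * M**2 / (5 * R)   # ~2.2e32 J for a uniform-density Earth
    tsar_bomba = 2.1e17                       # J, ~50 megatons of TNT
    print(binding_energy / tsar_bomba)        # ~1e15: fifteen orders of magnitude short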
And yes, not something to try in practice with an inhabited location. Perhaps a computer model, at most... actually, there's a thought regarding morbid fascination. I wonder what would be necessary to provide a sufficiently-realistic (uninhabite...
Not directly related, but an easier question: Do we currently have the technology to launch projectiles out of Earth's atmosphere into a path such that, in a year's time or so, the planet smashes into them from the other direction and sustains significant damage?
(Ignoring questions of targeting specific points, just the question of whether it's possible to arrange that without the projectiles falling into the sun or just following us eternally without being struck or getting caught in our gravity well too soon... hmm, if we could somehow put it into an o...
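(A minimal vis-viva sketch of the orbital side, assuming idealised coplanar two-body motion and a hypothetical eccentricity: by Kepler's third law, any heliocentric orbit with a one-year period has a 1 AU semi-major axis, so a projectile on an eccentric 1 AU orbit re-crosses the launch point exactly when Earth returns, and the mismatch in velocity direction supplies the impact speed.)

    import math

    mu = 1.327e20   # Sun's gravitational parameter, m^3/s^2
    AU = 1.496e11   # metres
    a = AU          # semi-major axis giving a one-year period (Kepler III)
    e = 0.3         # hypothetical eccentricity of the projectile's orbit
    v_proj = math.sqrt(mu * (2 / AU - 1 / a))   # projectile speed at r = 1 AU (vis-viva)
    v_earth = math.sqrt(mu / AU)                # Earth's circular speed, ~29.8 km/s
    # At r = a the two speeds are equal in magnitude; only the direction differs
    # by the flight-path angle, so the encounter speed is 2*v*sin(gamma/2).
    gamma = math.acos(math.sqrt(1 - e**2))      # flight-path angle where r = a
    v_rel = 2 * v_earth * math.sin(gamma / 2)
    print(v_rel)                                # ~9 km/s for e = 0.3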
In practice, this seems to break down at a specific point: this can be outlined, for instance, with the hypothetical stipulation "...and possesses the technology or similar power to cross universe boundaries and appear visible before me in my room, and will do so in exactly ten seconds.".
As with the fallacy of a certain ontological argument, the imagination/definition of something does not make it existential, and even if a certain concept contains no apparent inherent logical impossibilities that still does not mean that there could/would exi...
(Absent(?) thought after reading: one can imagine someone, through a brain-scanner or similar, controlling a robot remotely. One can utter, through the robot, "I'm not actually here.", where 'here' is where one is doing the uttering through the robot, and 'I' (specifically 'where I am') is the location of one's brain. The distinction between the claim 'I'm not actually here' and 'I'm not actually where I am' is notable. Ahh, the usefulness of technology. For belated communication, the part about intention is indeed significant, as with whether a diary is written in the present tense (time of writing) or in the past tense ('by the time you read this[ I will have]'...).)
To ask the main question that the first link brings to mind: What prevents a person from paying both a life insurance company and a longevity insurance company (possibly the same company) relatively-small amounts of money each, in exchange for either a relatively-large payout from the life insurance if the person dies early or a relatively-large payout from the longevity insurance if the person dies late?
To extend, what prevents a hypothetically large number of people from on average creating this effect (even if each is disallowed from having both rather than just one or the other), and so creating a guaranteed total loss overall on the part of an insurance company?
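(A toy pricing sketch with entirely hypothetical numbers, on the standard assumption that each premium is the actuarially fair cost plus a loading margin: exactly one of the two policies ever pays out, and the combined premiums exceed that one payout, so both the individual and the hypothetical crowd lose on average.)

    p_die_early = 0.5      # assumed probability of an 'early' death
    payout = 100_000       # each policy's payout, hypothetical
    loading = 1.15         # insurer's margin over the actuarially fair price
    life_premium = loading * p_die_early * payout             # pays iff early death
    longevity_premium = loading * (1 - p_die_early) * payout  # pays iff late death
    total_premiums = life_premium + longevity_premium         # 115,000
    guaranteed_payout = payout                                # exactly one policy pays
    print(guaranteed_payout - total_premiums)                 # -15,000: the buyer loses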
To answer the earlier question, an alteration which halved the probability of failure would indeed change an exactly-0% probability of success into a 50% probability of success.
If one is choosing between lower increases for higher values, unchanged increases for higher values, and greater increases for higher values, then the first has the advantage of not quickly giving numbers over 100%. I note though that the opposite effect (such as hexing a foe?) would require halving the probability of success instead of doubling the probability of failure.
The eff...
For what it's worth, I'm reminded of systems which handle modifiers (multiplicatively) according to the chance of failure:
For example, the first 20 INT increases magic accuracy from 80% to
(80% + (100% - 80%) * .01) = 80.2%
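(A minimal sketch of that multiplicative-on-failure scheme, with hypothetical names and the 20-INT-per-step rate from the quoted example: each step converts 1% of the remaining failure chance into success, so the total approaches but never exceeds 100%; the 'hexing' direction would analogously multiply the success chance instead.)

    def accuracy_after_int(base_accuracy, int_points, block=20, rate=0.01):
        # Each full block of INT converts `rate` of the *remaining* failure
        # chance into success (multiplicative on failure, hence capped below 1).
        blocks = int_points // block
        failure = 1.0 - base_accuracy
        return 1.0 - failure * (1.0 - rate) ** blocks

    print(accuracy_after_int(0.80, 20))   # ~0.802, matching the quoted example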
A clearer exampl...
The Turing machine doing the simulating does not experience pain, but the human being being simulated does.
Similarly, the waterfall argument found in the linked paper seems as though it could as-easily be used to argue that none of the humans in the solar system have intelligence unless there's an external observer to impose meaning on the neural patterns.
A lone mathematical equation is meaningless without a mind able to read it and understand what its squiggles can represent, but functioning neural patterns which respond to available stimuli causally(/thr...
(Assuming that it stays on the line of 'what is possible', in any case a higher Y than otherwise, but finding it then according to the constant X: 1 - ((19/31) * (1/19)) = 30/31, yes...)
I confess I do not understand the significance of the terms mixed outcome and weighted sum in this context, I do not see how the numbers 11/31 and 20/31 have been obtained, and I do not presently see how the same effect can apply in the second situation in which the relative positions of the symmetric point and its (Pareto?) lines have not been shifted, but I now see how in ...
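(Writing out, for concreteness, the standard definitions as I understand them, with placeholder outcome pairs since I cannot reconstruct the figures' actual points: a mixed outcome is a lottery over pure outcomes, and its utility pair is the probability-weighted sum of theirs; 11/31 and 20/31 do at least sum to 1, which fits their being mixture weights.)

    w = 11 / 31                     # mixture weight; 11/31 + 20/31 = 1
    u = (0.95, 0.4)                 # placeholder pure outcome (utility to X, to Y)
    v = (0.4, 0.95)                 # placeholder pure outcome
    mixed = (w * u[0] + (1 - w) * v[0],
             w * u[1] + (1 - w) * v[1])   # expected utilities under the lottery
    print(mixed)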
Rather than X or Y succeeding at gaming it by lying, however, it seems that a disinterested objective procedure that selects by Pareto optimalness and symmetry would then output a (0.6, 0.6) outcome in both cases, causing a -0.35 utility loss for the liar in the first case and a -0.1 utility loss for the liar in the second.
Is there a direct reason that such an established procedure would be influenced by a perceived (0.95, 0.4) option to not choose an X=Y Pareto outcome? (If this is confirmed, then indeed my current position is mistaken.)
I may be missing something: for Figure 5, what motivation does Y have to go along with perceived choice (0.95, 0.4), given that in this situation Y does not possess the information possessed (and true) in the previous situation that '(0.95, 0.4)' is actually (0.95, 0.95)?
In Figure 2, (0.6, 0.6) appears symmetrical and Pareto optimal to X. In Figure 5, (0.6, 0.6) appears symmetrical and Pareto optimal to Y. In Figure 2, X has something to gain by choosing/{allowing the choice of} (0.95, 0.4) over (0.6, 0.6) and Y has something to gain by choosing/{allowi...
' I am still mystified by the second koan.': The novice associates {clothing types which past cults have used} with cults, and fears that his group's use of these clothing types suggests that the group may be cultish.
In practice (though the clothing may have an unrelated advantage), the clothing one wears has no effect on the validity of the logical arguments used in reasoning/debate.
The novice fears a perceived connection between the clothing and cultishness (where cultishness is taken to be a state of faith over rationality, or in any case irrationality...
The purpose of the clothing is to make people aware of the dangers of cultishness, even though wearing identical clothing, all else equal, encourages cultishness. All else is not equal; it is a worthwhile cost to bring the issue to the fore and force people to compensate by thinking non-cultishly (not counter-cultishly).
...A novice rationalist approached the master Ougi and said, "
Depending on the cost, it at least seems to be worth knowing about. If one doesn't have it then one can be assured on that point, whereas if one does have it then one at least has appropriate grounds on which to second-guess oneself.
(I have been horrified in the past by tales of {people who may or may not have inherited a dominant gene for definite early disease-related death} who all refused to be tested, thus dooming themselves to lives of fear and uncertainty. If they were going to have entirely healthy lives then they would have lived in fear and u...
'I haven't seen a post on LW about the grue paradox, and this surprised me since I had figured that if any arguments would be raised against Bayesian LW doctrine, it would be the grue problem.':
If of relevance, note http://lesswrong.com/lw/q8/many_worlds_one_best_guess/ .
'The second AI helped you more, but it constrained your destiny less.': A very interesting sentence.
On other parts, I note that the commitment to a range of possible actions can be seen as larger-scale than commitment to a single action, even before the choice of which one is taken.
A particular situation that comes to mind, though:
Person X does not know of person Y, but person Y knows of person X. Y has an emotional (or other) stake in a tiebreaking vote that X will make; Y cannot be present on the day to observe the vote, but sets up a simple machine to detect what ...
Thought 1: If hypothetically one's family were going to die in an accident or otherwise (for valid causal wish-unrelated reasons), the added mental/emotional effect on oneself would be something to avoid in the first place. Given that one is not infallible, one can never assert absolute knowledge of non-causality (direct or indirect), and that near-infinitesimal consideration could haunt one. Compare this possibility to the ease, normally, of taking other routes and thus avoiding that risk entirely.
...other thoughts are largely on the matter of integrity... ...
CEV document: I have at this point somewhat looked at it, but indeed I should ideally find time to read through it and think through it more thoroughly. I am aware that the sorts of questions I think of have very likely already been thought of by those who have spent many more hours thinking about the subject than I have, and am grateful that the time has been taken to answer the specific thoughts that come to mind as initial reactions.
Reaction to the difference-showing example (simplified by the assumption that a sapient smarter-me is assumed to not e...
Diamond: Ahh. I note that looking at the equivalent diamond section, 'advise Fred to ask for box B instead' (hopefully including the explanation of one's knowledge of the presence of the desired diamond) is a notably potentially-helpful action, compared to the other listed options which can be variably undesirable.
Varying priorities: That I change over time is an accepted aspect of existence. There is uncertainty, granted; on the one hand I don't want to make decisions that a later self would be unable to reverse and might disapprove of, but on the o...
Reading other comments, I note my thoughts on the undesirability of extrapolation have largely been addressed elsewhere already.
Current thoughts on giving higher preference to a subset:
Though one would be happy with a world reworked to fit one's personal system of values, others likely would not be. Though selected others would be happy with a world reworked to fit their agreed system of values, others likely would not be. Moreover, assuming changes over time, even if such is held to a certain degree at one point in time, changes based on that may turn...
I unfortunately lack time at the moment; rather than write a badly-thought-out response to the complete structure of reasoning considered, I will for the moment write fully-thought-out thoughts on minor parts thereof that my (?) mind/curiosity has seized on.
'As for “taking over the world by proxy”, again SUAM applies.': this sentence stands out, but glancing upwards and downwards does not immediately reveal what SUAM refers to. Ctrl+F and looking at all appearances of the term SUAM on the page does not reveal what SUAM refers to. The first page of Goo...
I can somewhat sympathise, in that when removing a plaster I prefer to remove it slowly, for a longer bearable pain, than quickly for a brief unbearable pain. However, this can only be extended so far: there is a set (expected) length of continuing bearable pain over which one would choose to eliminate the entire thing with brief unbearable pain, as with tooth disease and (hypothetical) dentistry, or unpleasant-but-survivable illness and (phobic) vaccination.
'prefer any number of people to experience the former pain, rather than one having to bear the latt...
Is the distribution necessary (other than as a thought experiment)?
Simplifying to a 0->3 case: If changing (in the entire universe, say) all 0->1, all 1->2, and all 2->3 is judged as worse than changing one person's 0->3 --for the reason that, for an even distribution, the 1s and 2s would stay the same number and the 3s would increase with the 0s decreasing-- then for what hypothetical distribution would it be even worse and for what hypothetical distribution would it be less bad? Is it worse if there are only 0s who all become 1s, or is i...
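(A quick sketch of the counting involved, assuming a uniform starting distribution over levels 0-3 and that existing 3s stay at 3: the 1s and 2s keep the same counts, the 3s double, and the 0s vanish.)

    from collections import Counter

    def shift_all_up(dist, top=3):
        # Everyone at level k moves to level k+1; those already at `top` stay put.
        out = Counter()
        for level, count in dist.items():
            out[min(level + 1, top)] += count
        return out

    uniform = Counter({0: 10, 1: 10, 2: 10, 3: 10})
    print(shift_all_up(uniform))   # Counter({3: 20, 1: 10, 2: 10}) -- no 0s remain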
For that game, the sunk-costs fallacy and the training-to-do-random-things-infinitely phenomenon may help in speculating about why so many sink and cont...
On the one hand, even if someone doesn't accept responsibility for the operation of their own mind it seems that they nevertheless retain responsibility for the operation of their own mind. On the other hand, from a results-based (utilitarian?) perspective I see the problems that can result from treating an irresponsible entity as though they were responsible.
Unless one judged it as having significant probability that one would shortly be stabbed, have...
Beware! Crocker's Rules is about being able to receive information as fast as possible, not to transmit it!
From Radical Honesty:
Crocker's Rules didn't give you the right to say anything offensive, but other people could say potentially offensive things to you, and it was your responsibility not to be offended. This was surprisingly hard to explain to people; many people would read the careful explanation and hear, "Crocker's Rules mean you can say offensive things to other people."
From wiki.lw:
...In contrast to radical honesty, Crocker's rules
Note that these people believing this thing to be true does not in fact make it any likelier to be false. We judge it less likely to be true than we would a generic positing by a generic person, down to the point of no suspicion one way or the other, but the positing is not in fact reversed into a positive impression that the thing is false.
If one takes two otherwise-identical worlds (unlikely, I grant), one in which a large body of people X posit Y for (patently?) fallacious reasons and one in which that large body of people posit the c...
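(A minimal Bayesian sketch with made-up likelihoods: if the fallacious believers are nearly as likely to assert the thing when it is false as when it is true, their assertion moves the posterior only slightly, and never past the prior into evidence of falsity.)

    prior = 0.5
    p_assert_if_true = 0.30    # hypothetical likelihoods; only their ratio matters
    p_assert_if_false = 0.28
    posterior = (p_assert_if_true * prior) / (
        p_assert_if_true * prior + p_assert_if_false * (1 - prior))
    print(posterior)   # ~0.517: a sliver of evidence for, never a reversal to 'false'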
Indeed. nods
If sacrifice of myself was necessary to (hope to?) save the person mentioned, I hope that I would {be consistent with my current perception of my likely actions} and go through with it, though I do not claim complete certainty of my actions.
If those that would die from the hypothetical disease were the soon-to-die-anyway (very elderly/infirm), I would likely choose to spend my time on more significant areas of research (life extension, more-fatal/-painful diseases).
If all other significant areas had been dealt with or were being adequately dea...
I note the answer to this seems particularly straightforward if the few dozen who would probably die would also have been young and healthy at the time. Even more convenient if the subject is a volunteer, and/or if the experimenter (possibly with a staff of non-sentient robot record-keepers and essay-compilers, rather than humans?) did them on himself/herself/themself(?).
(I personally have an extremely strong desire to survive eternally, but I understand there are (/have historically been) people who would willingly risk death or even die for certain in o...
Humanity can likely be assumed to, under those conditions, balloon out to its former proportions (following the pattern of population increase to the point that available resources can no longer support further increase).
One possibility is that this would represent a delay in the current path and not much else, though depending on the length of time needed to rebuild our infrastructure it could make a large difference in efforts to establish humanity as safely redundant (outside Earth, that is).
Another possibility (the Oryx and Crake concept) is that due t...
I note that while most of the examples seem reasonable, the Dictator instance seems to stand out: by accepting the trumped-up prospector excuse as admissible, the organisation is agreeing to any similarly flimsy excuse that a country could make (e.g. the route not taken in The Sports Fan). The Lazy Student also comes to mind in terms of being an organisation that would accept such an argument, thus others also making it.
(Hm... I wonder if a valid equivalent of the Grieving case would be if the other country had in fact launched an easily-verifiable full...
Greetings. I apologise for possible oversecretiveness, but for the moment I prefer to remain in relative anonymity; this is a moniker used online and mostly kept from overlapping with my legal identity.
Though in a sense there is consistency of identity, for fairness I should likely note that my use of first-person pronouns may not always be entirely appropriate.
Personal interest in the Singularity can probably be ultimately traced back to the fiction Deus Ex, though I hope it would have arisen eventually even without that starting point; my experie...
(Unlurking and creating an account for use from this point onwards; please go easy on me.)
Something I found curious in the reading of the comments for this article is the perception that Bouzo took away the conclusion that clothing was in fact important for probability.
Airing my initial impression for possible contrast (/as an indication of my uncertainty): When I read the last sentence, I imagined an unwritten 'And in that moment the novice was enlightened', mirroring the structure of certain koans I once glanced through.
My interpretation is/was that those wo...
Running through this to check that my wetware handles it consistently.
Paying -100 if asked:
When the coin is flipped, one's probability branch splits into a 0.5 of oneself in the 'simulation' branch and 0.5 in the 'real' branch. For the 0.5 in the real branch, upon waking there is a subjective 50% probability of its being either of the two possible days, both of which will be woken on. So, 0.5 of the time waking in simulation, 0.25 waking in real 1, 0.25 waking in real 2.
0.5 x (260) + 0.25 x (-100) + 0.25 x (-100) = 80. However, this is the expected cash-balance change...
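(The same bookkeeping in runnable form, using the payoffs given above: +260 in the simulation branch, -100 on each real-branch waking.)

    branches = {
        "simulation": (0.50, 260),
        "real day 1": (0.25, -100),
        "real day 2": (0.25, -100),
    }
    expected = sum(p * payoff for p, payoff in branches.values())
    print(expected)   # 80.0 -- the expected cash-balance change, as noted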