Comment author: Pimgd 06 June 2016 09:10:27AM *  0 points [-]

Maybe I just don't get it, but offering me the option AFTER you've told me that it makes no difference makes it a pointless option. I get the feeling there's a single step missing from your explanation.

From what I'm reading, there's 3 things that can happen...

You spend $100. A prophet comes to you and tells you that you will lose $10,000 in the future, and then...
1) looks at you more closely, and coughs "wait, you're not the person I was looking for."
2) tells you something that sounds plausibly true, but turns out to be false, costing you $10,000 by overpaying for your next house.
1 & 2 happen with 50% chance, if you have spent $100.
If you don't spend $100, then
3) A prophet comes to you and tells you that you will lose $10,000 in the future, and then afterwards as you sputter "why", he tells you this plausibly true thing that turns out to be false, costing you $10,000 by overpaying for your next house.

(As for what the thing is, it's either something that makes you spend $10,000 in carefully rationalizing your decision to buy a house, or $10,000 costs in overbidding)

But... you've just told me that a prophet came to me and told me I will lose $10,000 in the future. I am already on path 3. There is no going back. Time CAN create time loops but there is no cause for it to do so in your explanation. You yourself walled it off by stating the prophecy was self-fulfilling and that you could spend $100 "if the prophecy weren't immutably correct" (this in a manner implying that it is immutably correct).

You have given me a button, but the button is disabled. I can't take any actions.

Also, there's something lurking in your description which might (I really am unsure) imply that if I spend $100, the world may become inconsistent and therefore cease to exist. Basically, replace path 1 with "universe ends." That would make spending the $100 really bad, since losing $10,000 is surely preferable to destroying your own universe.

Comment author: wafflepudding 06 June 2016 04:40:52PM 0 points [-]

You are on path 3, but the button is not disabled. The purpose of spending the $100 is to decrease the number of possible worlds where the prophet would come up and talk to you in the first place. You wouldn't end up destroying your timeline by making it inconsistent; ideally, this timeline was just never created because if it had been you would've spent the $100.

Out of curiosity, would you pay Omega in the counterfactual mugging? If you'd pay in CF but not here, that makes me worry that this formulation isn't similar.

Counterfactual Mugging Alternative

-1 wafflepudding 06 June 2016 06:53AM

Edit as of June 13th, 2016: I no longer believe this to be easier to understand than traditional CM, but stand by the rest of it. Minor aesthetic edits made.

First post on the LW discussion board. Not sure if something like this has already been written, need your feedback to let me know if I’m doing something wrong or breaking useful conventions.

An alternative to the counterfactual mugging, since people often need it explained a few times before they understand it. I think this one will be faster for most to comprehend because it arose organically, rather than seeming specifically contrived to create a dilemma between decision theories:

Pretend you live in a world where time travel exists and Time can create realities with acausal loops, with ordinary linear chronology, or with some other structure, so long as there is no paradox -- only self-consistent timelines can be generated.

In your timeline, there are prophets. A prophet (known to you to be honest and truly prophetic) tells you that you will commit an act which seems horrendously imprudent or problematic. It is an act whose effect will be on the scale of losing $10,000; an act you never would have taken ordinarily. But fight the prophecy all you want, it is self-fulfilling and you definitely live in a timeline where the act gets committed. However, if it weren’t for the prophecy being immutably correct, you could have spent $100 and, even having heard the prophecy (even having believed it would be immutable), the probability of you taking that action would be reduced by, say, 50%. So fighting the prophecy by spending $100 would mean that there were 50% fewer self-consistent (possible) worlds where you lost the $10,000, because it’s just much less likely for you to end up taking that action if you fight it rather than succumbing to it.

You may feel that there would be no reason to spend $100 averting a decision that you know you’re going to make, and see no reason to care about counterfactual worlds where you don’t lose the $10,000. But the fact of the matter is that if you could have precommitted to fight the choice, you would have: in the worlds where that prophecy could have been presented to you, you’d be decreasing the average disutility by (($10,000)(.5 probability) - ($100) = $4,900). Not following a precommitment that you would have made to prevent the exact situation which you’re now in, because you wouldn’t have followed the precommitment, seems an obvious failure mode, but UDT successfully does the calculation shown above and tells you to fight the prophecy. The simple fact that Updateless Decision Theorists actually do better on average than CDT proponents should tell causal decision theorists that converting to UDT is the causally optimal decision.
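The precommitment arithmetic in the paragraph above can be written out as a short sketch (the $10,000 loss, 50% reduction, and $100 cost are the figures from the post; the code itself is just an illustration):

```python
# Expected disutility with and without spending $100 to fight the prophecy,
# using the illustrative figures from the post.

LOSS = 10_000          # cost of the prophesied act
FIGHT_COST = 100       # cost of fighting the prophecy
P_LOSS_IF_FIGHT = 0.5  # fighting halves the chance the act occurs

# Averaged over the possible worlds where the prophecy could be delivered:
ev_fight = P_LOSS_IF_FIGHT * LOSS + FIGHT_COST  # expected cost if you precommit
ev_submit = 1.0 * LOSS                          # expected cost if you don't

print(ev_fight)              # 5100.0
print(ev_submit)             # 10000.0
print(ev_submit - ev_fight)  # 4900.0 -- the $4,900 savings from the post
```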

 

(You may assume also that your timeline is the only timeline that exists, so as not to further complicate the problem by your degree of empathy with your selves from other existing timelines.)

Comment author: wafflepudding 24 May 2016 02:08:20AM 0 points [-]

Hmm… does some instance of utility get multiplied by the number of people who find it utilitous? Like, if there are twice as many humans, does that mean that one Babyeater baby eaten subtracts twice as much from group utility?

In response to Timeless Physics
Comment author: wafflepudding 15 April 2016 01:40:44AM 0 points [-]

An omnipotent magicker decides to flip a coin, and the coin lands heads. Afterwards, the magicker changes every particle in the universe to what it would be had the coin landed tails -- including those in his own brain. Is it true that in the past, the coin landed heads, even though this event is epiphenomenal?

I realize that the magicker is violating the laws of entropy, and that in the real world there are no magickers. I also realize that for the purposes of anyone in the universe, the first coin flip doesn't and couldn't possibly matter, because it was epiphenomenal. But I'm still curious what the answer to my question is.

In response to Zombies! Zombies?
Comment author: Richard4 04 April 2008 03:37:54PM 11 points [-]

Eliezer - thanks for this post, it's certainly an improvement on some of the previous ones. A quick bibliographical note: Chalmers' website offers his latest papers, and so is a much better source than Google Books. A terminological note (to avoid unnecessary confusion): what you call 'conceivable', others of us would merely call "apparently conceivable". That is, your view would be characterized as a form of Type-A materialism, the view that zombies are not even (genuinely) conceivable, let alone metaphysically possible. On to the substantive points:

(1) You haven't, so far as I can tell, identified any logical contradiction in the description of the zombie world. You've just pointed out that it's kind of strange. But there are many bizarre possible worlds out there. That's no reason to posit an implicit contradiction. So it's still completely mysterious to me what this alleged contradiction is supposed to be.

(2) It's misleading to say it's "miraculous" (on the property dualist view) that our qualia line up so neatly with the physical world. There's a natural law which guarantees this, after all. So it's no more miraculous than any other logically contingent nomic necessity (e.g. the constants in our physical laws). That is, it's "miraculous" in the same sense that it's "miraculous" that our universe is fit to support life. Atheists and other opponents of fine-tuning arguments are not usually so troubled by this kind of alleged 'miracle'. Just because things logically could have been different, doesn't mean that they easily could have been different. Natural laws are pretty safe and dependable things. They are primitive facts, not explained by anything else, but that doesn't make them chancy.

(3) I'd also dispute the following characterization: "talk about consciousness... arises from a malfunction (drawing of logically unwarranted conclusions) in the causally closed cognitive system that types philosophy papers."

No, typing the letters 'c-o-n-s-c-i-o-u-s-n-e-s-s' arises from a causally closed cognitive system. Whether these letters actually mean anything (and so constitute a contentful conclusion that may or may not follow from other contentful premises) arguably depends on whether the agent is conscious. (Utterances express beliefs, and beliefs are partly constituted by the phenomenal properties instantiated by their neural underpinnings.) That is, Zombie (or 'Outer') Chalmers doesn't actually conclude anything, because his utterances are meaningless. A fortiori, he doesn't conclude anything unwarrantedly. He's just making noises; these are no more susceptible to epistemic assessment than the chirps of a bird. (You can predict the zombie's behaviour by adopting the Dennettian pretense of the 'intentional stance', i.e. interpreting the zombie as if it really had beliefs and desires. But that's mere pretense.)

(4) I'm all for 'reflective coherence' (at least if that means what I think it means). I don't see how it counts against this view, unless you illicitly assume a causal theory of knowledge (which I obviously don't).

P.S. Note that while I'm a fan of epiphenomenalism myself, Chalmers doesn't actually commit to the view. See his response to Perry for more detail. (It also addresses many of the other points you raise in this post.)

In response to comment by Richard4 on Zombies! Zombies?
Comment author: wafflepudding 04 March 2016 04:16:36AM 0 points [-]

On (3), if Zombie Chalmers can't be correct or incorrect about consciousness -- as in, he's just making noise when he says "consciousness" -- does the same hold for his beliefs on anything else? Like, Zombie Chalmers also (probably) says "the sun will rise tomorrow," but would you also question whether these letters actually mean anything? In both the cases of the sun's rising and epiphenomenalism's truth, Zombie Chalmers is commenting on an actual way that reality can be. Is there a difference? Or does Zombie Chalmers have no beliefs about anything? I'd think that a zombie could be thought to have beliefs to the same extent that some advanced AI could.

In response to Worse Than Random
Comment author: Caledonian2 11 November 2008 07:33:35PM 14 points [-]

But it is an inherently odd proposition that you can get a better picture of the environment by adding noise to your sensory information - by deliberately throwing away your sensory acuity. This can only degrade the mutual information between yourself and the environment. It can only diminish what in principle can be extracted from the data.

It is certainly counterintuitive to think that, by adding noise, you can get more out of data. But it is nevertheless true.

Every detection system has a perceptual threshold, a level of stimulation needed for it to register a signal. If the system is mostly noise-free, this threshold is a ‘sharp’ transition. If the system has a lot of noise, the threshold is ‘fuzzy’. The noise present at one moment might destructively interact with the signal, reducing its strength, or constructively interact, making it stronger. The result is that the threshold becomes an average; it is no longer possible to know whether the system will respond merely by considering the strength of the signal.

When dealing with a signal that is just below the threshold, a noiseless system won’t be able to perceive it at all. But a noisy system will pick out some of it - some of the time, the noise and the weak signal will add together in such a way that the result is strong enough for the system to react to it positively.

You can see this effect demonstrated at science museums. If an image is printed very, very faintly on white paper, just at the human threshold for visual detection, you can stare right at the paper and not see what’s there. But if the same image is printed onto paper on which a random pattern of grey dots has also been printed, we can suddenly perceive some of it - and extrapolate the whole from the random parts we can see. We are very good at extracting data from noisy systems, but only if we can perceive the data in the first place. The noise makes it possible to detect the data carried by weak signals.

When trying to make out faint signals, static can be beneficial. Which is why biological organisms introduce noise into their detection physiologies - a fact which surprised biologists when they first learned of it.
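The threshold effect described above (a form of stochastic resonance) can be sketched with a toy simulation; the particular threshold, signal, and noise values are made up for illustration:

```python
import random

random.seed(0)

THRESHOLD = 1.0        # detector fires only when input >= 1.0
SIGNAL = 0.9           # constant signal just below threshold
NOISE_AMPLITUDE = 0.3  # uniform noise in [-0.3, 0.3]
TRIALS = 100_000

# Noiseless detector: the sub-threshold signal is never registered.
noiseless_hits = sum(SIGNAL >= THRESHOLD for _ in range(TRIALS))

# Noisy detector: noise is added to the signal *before* thresholding,
# so it sometimes pushes the weak signal over the edge.
noisy_hits = sum(
    SIGNAL + random.uniform(-NOISE_AMPLITUDE, NOISE_AMPLITUDE) >= THRESHOLD
    for _ in range(TRIALS)
)

print(noiseless_hits)       # 0 -- the weak signal is completely invisible
print(noisy_hits / TRIALS)  # roughly a third of trials now register it
```

The noiseless detector extracts nothing at all from the 0.9 signal, while the noisy one registers it on the fraction of trials where the noise happens to add constructively.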

Comment author: wafflepudding 31 December 2015 08:14:42AM 0 points [-]

This post is my first experience learning about noise in algorithms, so forgive me if I seem underinformed. Two points occurred to me while reading this comment, some clarification would be great:

First, while it was intriguing to read that noise can bump input just below the perceptual threshold above it half the time, it seems to me that input just above the threshold would half the time be knocked below it. So wouldn't noise lead to no net gain? Just a loss in acuity?

Second, I'm confused how input below the perceptual threshold is actually input. If a chair moves in front of a camera so slightly that the camera doesn't register a change in position, the input seems to me like zero, and noise loud enough to move zero past the perceptual threshold would not distinguish between movement and stillness, but would go off half the time and be silent half the time. If that doesn't make sense, assume that the threshold is .1 meters, and the camera doesn't notice any movement less than that. Let's say your noise is a random number between .01 meters and -.01 meters. The chair moves .09 meters, and your noise lands on .01 meters. I wouldn't think that would cross the threshold, because the camera can't actually detect that .09 meters if its threshold is .1. So, wouldn't the input just be 0 motion detected + .01 meters of noise = .01 meters of motion? Maybe I'm misunderstanding.
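One way to untangle the second point is to note that the gain only appears when the noise perturbs the raw physical signal before the threshold is applied; noise added after the camera has already quantized sub-threshold motion to zero indeed accomplishes nothing. A toy sketch (not from the thread, and using slightly larger noise than the ±.01 m in the comment so the effect is visible):

```python
import random

random.seed(1)

THRESHOLD = 0.1  # camera registers motion only when displacement >= 0.1 m
MOVE = 0.09      # the chair moves 0.09 m
NOISE = 0.02     # noise of up to +/- 0.02 m (larger than the comment's +/- 0.01)
TRIALS = 100_000

def detect(x):
    # The camera's quantization: sub-threshold input is reported as zero.
    return x if x >= THRESHOLD else 0.0

# Noise added AFTER thresholding (the commenter's scenario): never helps,
# because the 0.09 m has already been discarded by the detector.
after = sum(detect(MOVE) + random.uniform(-NOISE, NOISE) >= THRESHOLD
            for _ in range(TRIALS))

# Noise added BEFORE thresholding (the stochastic-resonance case): the raw
# 0.09 m is still present at the sensor, so noise can push it over 0.1 m.
before = sum(detect(MOVE + random.uniform(-NOISE, NOISE)) > 0
             for _ in range(TRIALS))

print(after)            # 0 -- post-quantization noise recovers nothing
print(before / TRIALS)  # roughly a quarter of trials register the motion
```

So the commenter's intuition is right for noise injected downstream of the detector, but the biological systems in the parent comment inject it upstream, at the sensor itself.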

Comment author: Viliam 07 December 2015 09:23:26AM 6 points [-]

To say the most obvious thing, the quality threshold for comments should be much lower than for articles. And maybe there should also be some "chat" area where comments just appear and disappear without voting, so that no one would hesitate to post there; then, after receiving some positive feedback, they would feel comfortable posting regular comments.

Maybe there could be a special posting mode for newcomers, which would provide some advantages and disadvantages, like training wheels. For example it would not display negative comment karma (karma below zero would be displayed as zero), it could encourage specific verbal feedback which would be visible only to the comment author (or perhaps require downvoters to select one of predefined explanations, such as "you were rude" or "you promoted pseudoscience"), but it would also limit the number of comments per day and per thread (to prevent spamming by people who can't take a hint). After receiving enough total karma, the newbie mode would be turned off. -- That's just a quick idea, maybe completely wrong.

Or maybe we could encourage people to be nice to each other by giving positive feedback in addition to upvotes, such as "this is nice" or "thank you for the research", which would be displayed as small icons above the comment. Generally, to add some optional flavor to the numbers, whether positive or negative.

In response to comment by Viliam on LessWrong 2.0
Comment author: wafflepudding 23 December 2015 12:46:14AM 3 points [-]

In reading the Sequences, I feel weird about replying to comments because most of them are from seven years ago. Is it frowned upon to respond to something crazy old and possibly obsolete?

Comment author: wafflepudding 13 November 2015 03:52:51PM 0 points [-]

I love this series. Except, I have very particularly been in an argument where I said the phrase, "Hinduism is, by definition, a religion." Isn't agreement on common usage useful if you want to communicate efficiently? Maybe Wiggin shouldn't be used commonly, but one person defining Wiggin in a manner that contradicts the dictionary definition certainly doesn't do anyone any favors. And I think it's fine for common usage to define humans as mortal, as long as it consistently assumes that Socrates is inhuman when he goes on living forever.

Comment author: wafflepudding 27 September 2015 06:05:20PM 0 points [-]

I disagree. Agreeing on term definitions beforehand would solve all of these problems: The definition of religion is not "something that answers theological questions," therefore the By Definition argument is ineffective for proving that atheism is a religion. (Incidentally, if that were the definition of religion, then atheism would be a religion.) For Hinduism, if someone tried to tell me that it was not a religion, I would necessarily use the definition of religion to prove them wrong. If Hinduism did not fit the definition of religion, it would not be a religion.

Comment author: wafflepudding 26 September 2015 01:58:07AM 0 points [-]

This hurts my image of Freud. Of course, after I have a dream about skyscrapers, he can explain that it's connected to my love of my phallus, but could he predict my love of my phallus based on a dream about skyscrapers?
