Comment author: Arielgenesis 29 July 2016 03:25:50AM 0 points [-]

I will have to copy paste my answer to your other comment:

Yes, I could. I chose not to. It is a balance between suspension of disbelief and narrative simplicity. Moreover, I am not sure how much credence I should put in recent cosmological theories that they will not be updated in the future, making my narrative setup obsolete. I also do not want to burden my reader with having to be familiar with cosmological theories.

Am I not allowed to use such a narrative technique to simplify my story and deliver my point? Yes, I know it is out of touch with the human condition, but I was hoping it would not strain my audience's suspension of disbelief.

Comment author: buybuydandavis 29 July 2016 09:23:14PM 1 point [-]

The problem is that the unrealistic simplification acts precisely on the factor you're trying to analyze - falsifiability. If you relax the unrealistic assumption, the point you're trying to make about falsifiability no longer holds.

Comment author: Clarity 27 July 2016 10:00:15PM 1 point [-]

Why all these downvotes? Downvoters, you realise you are the reason people continue to leave LessWrong, and you're killing this place.

Comment author: buybuydandavis 28 July 2016 12:38:28PM 6 points [-]

I considered downvoting. I opted instead to ignore after reading the preamble, which told me nothing but

I talked to a guy about solving his problem. I don't think it worked. Tell me if you have an interesting insight.

while taking 3 paragraphs to do it, with page after page after page of dialogue following.

I'm generally for letting anyone share what they have to share, but the tone of the preamble screams low-budget wannabe internet crank [TheProblem(tm), among other issues of tone], and given that many have a higher signal-to-noise threshold than I do, I suspect the downvotes were responses to having their crank detector pinged.

I struggled with responding to this, as I don't want to discourage people generally from sending in even the half-baked, but this kind of thing also makes people leave LessWrong.

Comment author: Arielgenesis 28 July 2016 05:56:43AM 0 points [-]

Well... That's part of the story. I'm sure there is a term for it, but I don't know what it is. Something that the story gives you and you accept as fact.

Comment author: buybuydandavis 28 July 2016 12:06:52PM *  1 point [-]

That kind of knowledge is not part of the human condition. By making it a presupposition of your story, you render your hypothetical inapplicable to actual human life.

Comment author: Arielgenesis 25 July 2016 05:29:30PM 0 points [-]

The idea of the story is that there is no evidence. Because I think, in real life, there are sometimes important and relevant things with no evidence. In this case, Adam's innocence is important and relevant to Eve (for emotional and social reasons, I presume), but there is no, and there will never be, evidence. Given that, saying "If there is evidence, then the belief could be falsified" is a kind of cheating, because producing new evidence is not possible anymore.

Comment author: buybuydandavis 27 July 2016 12:30:53PM 0 points [-]

because producing new evidence is not possible anymore.

How do you claim to know that?

Comment author: buybuydandavis 27 July 2016 12:29:03PM 0 points [-]

The search was so thorough, there could never be any new evidence about what Adam had done before the custody that could be presented in the future.

Belief in absolute, dogmatic claims on the lack of evidentiary value of possible future observations leads to unfalsifiable conclusions.

Eve is irrational to conclude that her inability to conceive of a possible future observation to change her mind means that it is impossible for such an observation to happen.

As an aside, I believe you can make a more sciency argument with recent cosmological theories. There is something about a future state of the universe where all our current evidence for the Big Bang would cease to be observable, and all we could observe would be our own galaxy.

Comment author: Stuart_Armstrong 22 July 2016 06:51:28PM 1 point [-]

ABABABABABAB...

It's deterministic, but not memoryless.

But it really does seem that there is a difference between facing an environment and another player - the other player adapts to your strategy in a way the environment doesn't. The environment only adapts to your actions.

I think for unbounded agents facing the environment, a deterministic policy is always optimal, but this might not be the case for bounded agents.

Comment author: buybuydandavis 22 July 2016 10:56:22PM 1 point [-]

I always had the informal impression that the optimal policies were deterministic

So an impression that optimal memoryless policies were deterministic?

That seems even less likely to me. If the environment has state and you're not allowed any, you're playing at a disadvantage. Randomness is one way to counter state when you don't have state of your own.
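A minimal sketch of that point (a hypothetical two-state setup of my own, not anything from the thread): when two hidden states look identical to the agent, a deterministic memoryless policy can get trapped forever, while a coin-flip policy escapes with probability 1.

```python
import random

# Two hidden states, 'L' and 'R', emit the same observation, so a
# memoryless policy cannot tell them apart. Action 'right' takes L
# to the goal but leaves R stuck; 'left' takes R to the goal but
# leaves L stuck. (States and payoffs are made up for illustration.)
STEP = {
    ('L', 'right'): 'goal', ('L', 'left'): 'L',
    ('R', 'left'): 'goal', ('R', 'right'): 'R',
}

def run(policy, start, max_steps=1000):
    """Steps taken to reach the goal, or None if the cap is hit."""
    state = start
    for t in range(1, max_steps + 1):
        state = STEP[(state, policy())]
        if state == 'goal':
            return t
    return None

random.seed(0)
always_right = lambda: 'right'                        # deterministic, memoryless
coin_flip = lambda: random.choice(['right', 'left'])  # stochastic, memoryless

print(run(always_right, 'L'))  # 1: fine when started in L
print(run(always_right, 'R'))  # None: stuck in R forever
print(run(coin_flip, 'R'))     # finite: randomness escapes the trap
```

The deterministic memoryless policy fails from half the starting states, while the stochastic one reaches the goal from either - which is the sense in which randomness substitutes for the memory the agent isn't allowed.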

But it really does seem that there is a difference between facing an environment and another player - the other player adapts to your strategy in a way the environment doesn't. The environment only adapts to your actions.

I still don't see a difference. Your strategy is only known through your actions, by another player and by the environment alike, so they're in the same boat.

Labeling something the environment or a player seems arbitrary and irrelevant. What capabilities are we talking about? Are these terms of art for which some standard specifying capability exists?

What formal distinctions have been made between players and environments?

Comment author: buybuydandavis 21 July 2016 12:21:25PM 1 point [-]

I always had the informal impression that the optimal policies were deterministic

Really? I wouldn't have ever thought that at all. Why do you think you thought that?

when facing the environment rather than other players. But stochastic policies can also be needed if the environment is partially observable

Isn't that kind of what a player is? Part of the environment with a strategy and only partially observable states?

Although for this player, don't you have an optimal strategy, except for the first move? The Markov "Player" seems to like change.

Isn't this strategy basically optimal? ABABABABABAB... Deterministic, just not the same every round. Am I missing something?

Comment author: buybuydandavis 18 July 2016 11:43:25AM 0 points [-]

it feels like it will only take a few minutes

Does it? Do you really feel that way? Let me suggest that maybe that's not an accurate description of what you feel.

I feel like I'm not committing to more than a few minutes, even though if I had thought of it, I would estimate I'd be spending more than a few minutes.

To me, the problem is in incrementalism. I read a tweet. I read another. No particular tweet is a huge time investment. (Just like no particular M&M makes you fat). So it's tweet after tweet after tweet. I'm never facing one tweet that is going to be a big time investment. So it's tweet after tweet after tweet.

Hmmm. And maybe that's the secret. Getting fat eating M&Ms is easy to visualize. And conceptualize. I need a similarly meaningful vision of the lacking-in-accomplishment me that eating tweets will turn me into.

Inverse procrastination

I like that. I saw it as procrastinating on having fun with work. I would see the problem with that as making sure that it was work, instead of some useless pastime, that occurred while you weren't watching the movie. That's not a problem for you?

There's a similar strategy of "procrastinating later": "I'll goof off a little later." I think some argue that you get the satisfaction of the goof-off in the anticipation of it, and that helps you continue.

Comment author: kilobug 05 July 2016 11:50:40AM 3 points [-]

Another, more directly worrying question is why, or whether, the p-zombie philosopher postulates that other persons have consciousness.

After all, if you can speak about consciousness exactly like we do and yet be a p-zombie, why doesn't Chalmers assume he's the only one who is not a zombie, and therefore let go of all forms of caring for others and all morality?

The fact that Chalmers and people like him still behave as if they consider other people to be as conscious as they are probably points to them having belief-in-belief, more than actual belief, in the possibility of zombieness.

In response to comment by kilobug on Zombies Redacted
Comment author: buybuydandavis 09 July 2016 12:09:43PM 1 point [-]

Another, more directly worrying question is why, or whether, the p-zombie philosopher postulates that other persons have consciousness.

A wonderful way to dehumanize.

therefore let go of all forms of caring for others and all morality?

The meat bag you ride will let go of caring, or not.

Under the theory, the observer chooses nothing in the physical world. The meat bag produces experiences of caring for you, or not, according to its meat-bag reasons for action in the world.

In response to comment by Piecewise on Zombies Redacted
Comment author: kilobug 05 July 2016 11:47:44AM 1 point [-]

I agree with your point in general, and it does speak against an immaterial soul surviving death, but I don't think it necessarily applies to p-zombies. The p-zombie hypothesis is that the consciousness "property" has no causal power over the physical world, but it doesn't say that there is no causality the other way around: that the state of the physical brain can't affect the consciousness. So a traumatic brain injury would (through some unexplained, mysterious mechanism) be reflected in that immaterial consciousness.

But sure, it's yet more epicycles.

In response to comment by kilobug on Zombies Redacted
Comment author: buybuydandavis 09 July 2016 11:58:41AM 0 points [-]

You're watching a POV movie of a meat bag living out its life. When the meat bag falls apart, the movie gets crapped up.
