Comment author: Ebthgidr 19 December 2014 06:12:12PM *  0 points [-]

Oh, that's what I've been failing to get across.

I'm not saying if not(p) then (if provable(p) then q). I'm saying if not provable(p) then (if provable(p) then q)

Comment author: DanielFilan 20 December 2014 06:10:52AM *  0 points [-]

I'm saying if not provable(p) then (if provable(p) then q)

You aren't saying that though. In the post where you numbered your arguments, you said (bolding mine)

if not(provable(P)) then provable(if provable(P) then P)

which is different, because it has an extra 'provable'.

Comment author: Ebthgidr 19 December 2014 12:40:39AM 0 points [-]

So the statement (if not(p) then (if p then q)) is not provable in PA? Doesn't it follow immediately from the definition of if-then in PA?

Comment author: DanielFilan 19 December 2014 07:41:26AM 0 points [-]

(if not(p) then (if p then q)) is provable. What I'm claiming isn't necessarily provable is (if not(p) then provable(if provable(p) then q)), which is a different statement.

Comment author: Ebthgidr 18 December 2014 07:59:31PM *  0 points [-]

That doesn't actually answer my original question--I'll try writing out the full proof.

Premises:

  1. P or not-P is true in PA

  2. Also, because of that, if p -> q and not(p) -> q, then q -- use rules of distribution over and/or

So: 1. provable(P) or not(provable(P)) by premise 1

2: If provable(P), provable(P) by: switch if p then p to not p or p, premise 1

3: if not(provable(P)) Then provable( if provable(P) then P): since if p then q=not p or q and not(not(p))=p

4: therefore, if not(provable(P)) then provable(P): 3 and Löb's theorem

5: Therefore Provable(P): By premise 2, line 2, and line 4.

Where's the flaw? Is it between lines 3 and 4?

Comment author: DanielFilan 18 December 2014 09:17:27PM 0 points [-]

I think step 3 is wrong. Expanding out your logic, you are saying that if not(provable(P)), then (if provable(P) then P), then provable(if provable(P) then P). The second step in this chain is wrong, because there are true facts about PA that we can prove but that PA cannot prove.
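In modal notation (writing □ for "provable in PA"), the gap in step 3 can be made explicit. This is my gloss on the exchange, not anything either commenter wrote:

```latex
% This is a propositional tautology, hence provable:
\neg\Box P \;\to\; (\Box P \to P)
% Step 3 instead needs the boxed version, which is NOT a theorem:
\neg\Box P \;\to\; \Box(\Box P \to P)
% Löb's theorem:
\Box(\Box P \to P) \;\to\; \Box P
% If the middle line were a theorem, chaining it with Löb's theorem
% would give \neg\Box P \to \Box P, i.e. \Box P for every P,
% contradicting the incompleteness of PA.
```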

Comment author: spxtr 18 December 2014 03:49:52AM *  3 points [-]

I made a plot of the entropy and the (correct) energy. Every feature of these plots should make sense.

Note that the exponential turn-on in E(T) is a common feature to any gapped material. Semiconductors do this too :)
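That exponential turn-on is easy to reproduce in the simplest gapped model. The sketch below assumes a collection of independent two-level systems with level spacing eps, in natural units with k = 1; it is my illustration, not spxtr's actual plot code:

```python
import numpy as np

def two_level_stats(T, eps=1.0, N=1.0):
    """Canonical-ensemble energy and entropy for N independent
    two-level systems with level spacing eps (natural units, k = 1)."""
    beta = 1.0 / T
    Z = 1.0 + np.exp(-beta * eps)        # partition function per system
    p1 = np.exp(-beta * eps) / Z         # occupation of the upper level
    p0 = 1.0 - p1
    E = N * eps * p1                     # mean energy
    S = -N * (p0 * np.log(p0) + p1 * np.log(p1))  # Gibbs entropy
    return E, S

# Low T: E ~ N*eps*exp(-eps/T), the exponential "turn-on";
# high T: E -> N*eps/2 and S -> N*ln(2).
```

At T much smaller than eps, the Boltzmann factor exp(-eps/T) suppresses the upper level, which is the same exponential activation a semiconductor shows across its band gap.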

Comment author: DanielFilan 18 December 2014 11:05:17AM *  1 point [-]

The energy/entropy plot makes total sense; the energy/temperature plot doesn't really, because I don't have a good feel for what temperature actually is, even after reading the "Temperature" section of your argument (it previously made sense because Mathematica was only showing me the linear-like part of the graph). Can you recommend a good text to improve my intuition? Bonus points if this recommendation arrives in the next 9.5 hours, because then I can get the book from my university library.

Comment author: Ebthgidr 18 December 2014 03:30:07AM 0 points [-]

Well, there is, unless I misunderstand what meta level provable(not(provable(consistency))) is on.

Comment author: DanielFilan 18 December 2014 10:54:58AM 0 points [-]

I think you do misunderstand that, and that the proof of not(provable(consistency(PA))) is not in fact in PA (remember that the "provable()" function refers to provability in PA). Furthermore, regarding your comment before the one that I am responding to now, just because not(provable(C)) isn't provable in PA, doesn't mean that provable(C) is provable in PA: there are lots of statements P such that neither provable(P) nor provable(not(P)), since PA is incomplete (because it's consistent).
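To make the meta-level point explicit (standard notation; my gloss, not DanielFilan's wording):

```latex
% Con(PA) abbreviates the arithmetized consistency statement:
\mathrm{Con}(\mathrm{PA}) \;:\equiv\; \neg\,\mathrm{Prov}_{\mathrm{PA}}(\ulcorner 0 = 1 \urcorner)
% Gödel's second incompleteness theorem is a metatheorem, proved
% outside PA: if PA is consistent, then PA does not prove Con(PA).
% Knowing, in the metatheory, that Con(PA) is unprovable is not the
% same as PA proving that fact; in particular it does not yield
\mathrm{Prov}_{\mathrm{PA}}\bigl(\ulcorner \neg\,\mathrm{Prov}_{\mathrm{PA}}(\mathrm{Con}(\mathrm{PA})) \urcorner\bigr)
```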

Comment author: Lumifer 17 December 2014 07:21:12PM 2 points [-]

so temperature is in the mind

I am not quite sure in which way this statement is useful.

"..and for an encore goes on to prove that black is white and gets himself killed on the next zebra crossing." -- Douglas Adams

Comment author: DanielFilan 18 December 2014 12:50:15AM 3 points [-]

I had that thought as well, but the 'Second Law Trickery' section convinced me that it was a useful statement.

Comment author: Ebthgidr 17 December 2014 05:44:27PM 0 points [-]

Your reasons were that not(provable(c)) isn't provable in PA, right? If so, then I will rebut thusly: the setup in my comment immediately above (i.e. either provable(c) or not(provable(c))) gets rid of that.

Comment author: DanielFilan 18 December 2014 12:47:12AM 0 points [-]

I'm not claiming that there is no proposition C such that not(provable(C)), I'm saying that there is no proposition C such that provable(not(provable(C))) (again, where all of these 'provable's are with respect to PA, not our whole ability to prove things). I'm not seeing how you're getting from not(provable(not(provable(C)))) to provable(C), unless you're commuting 'not's and 'provable's, which I don't think you can do for reasons that I've stated in an ancestor to this comment.

Comment author: DanielFilan 17 December 2014 10:46:22AM *  4 points [-]

[Spoiler alert: I can't find any 'spoiler' mode for comments, so I'm just going to give the answers here, after a break, so collapse the comment if you don't want to see that]

.

.

.

.

.

.

.

.

.

.

For the entropy (in natural units), I get

and for the energy, I get

Is this right? (upon reflection and upon consulting graphs, it seems right to me, but I don't trust my intuition for statistical mechanics)

Comment author: ike 17 December 2014 05:10:06AM 0 points [-]

I downloaded the paper you linked to and will read it shortly. I'm totally sympathetic to the "didn't want to make a long comment longer" excuse, having felt that way many times myself.

I agree in the single-world case, I wouldn't want to do it. That's not because I care about the single world without me per se (as in caring for the people in the world), but because I care about myself who would not exist with ~1 probability. In a multiverse, I still exist with ~1 probability. You can argue that I can't know for sure that I live in a multiverse, which is one of the reasons I'm still alive in your world (the main reason being it's not practical for me right now, and I'm not really confident enough to bother researching and setting something like that up.) However, you also don't know that anything you do is safe, by which I mean things like driving, walking outside, etc. (I'd say those things are far more rational in a multiverse, anyway, but even people who believe in single world still do these things.)

Another reason I don't have a problem with discontinuity is that the whole problem seems only to arise when you have an infinite number of worlds, and I just don't feel like that argument is convincing.

I don't think you need infinite knowledge to know whether x=0 or x>0, especially if you give some probability to higher level multiverses. You don't need to know for sure that x>0 (as you can't know anyway), but you can have 99.9% confidence that x>0 rather easily, conditional on MWI being true. As I explained, that is enough to take risks.

If I wake up after, in my case that I laid out, that would mean that I won, as I specified I would be killed while asleep. I could even specify that the entire lotto picking, noise generation, and checking is done while I sleep, so I don't have to worry about it. That said, I don't think the question of my subjective expectation of no longer existing is well-defined, because I don't have a subjective experience if I no longer exist. If I am cloned, then told one of me is going to be vaporized without any further notice, and it happens fast enough not to have them feel anything, then my subjective expectation is 100% to survive. That's different from the torture case you mentioned above, where I expect to survive, and have subjective experiences. I think we do have some more fundamental disagreement about anthropics, which I don't want to argue over until I hash out my viewpoint more. (Incidentally, it seemed to me that Eliezer agrees with me at least partly, from what he writes in http://lesswrong.com/lw/14h/the_hero_with_a_thousand_chances/:

"What would happen if the Dust won?" asked the hero. "Would the whole world be destroyed in a single breath?"

Aerhien's brow quirked ever so slightly. "No," she said serenely. Then, because the question was strange enough to demand a longer answer: "The Dust expands slowly, using territory before destroying it; it enslaves people to its service, before slaying them. The Dust is patient in its will to destruction."

The hero flinched, then bowed his head. "I suppose that was too much to hope for; there wasn't really any reason to hope, except hope... it's not required by the logic of the situation, alas..."

I interpreted that as saying that you can only rely on the anthropic principle (and super quantum psychic powers), if you die without pain.)

I'm actually planning to write a post about Big Worlds, anthropics, and some other topics, but I've got other things and am continuously putting it off. Eventually. I'd ideally like to finish some anthropics books and papers, including Bostrom's, first.

Comment author: DanielFilan 17 December 2014 07:52:55AM *  0 points [-]

Another, more concise way of putting my troubles with discontinuity: I think that your utility function over universes should be a computable function, and the computable functions are continuous.
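The claim that computable functions are continuous is a standard result of computable analysis; here is a sketch in my own wording, under the usual model where a program reads its real-valued input digit by digit:

```latex
% To output f(x) to precision 2^{-n}, the program halts after reading
% only finitely many digits of x, say the first m(n). Any y agreeing
% with x on those digits produces the same 2^{-n}-approximation, so
|f(y) - f(x)| \le 2^{-n+1} \quad \text{whenever} \quad |y - x| < 2^{-m(n)},
% which is exactly the epsilon-delta definition of continuity.
% In particular, a utility with a jump at x = 0 is not computable.
```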

Also - what, you have better things to do with your time than read long academic papers about philosophy of physics right now because an internet stranger told you to?!

Comment author: DanielFilan 17 December 2014 07:43:04AM 0 points [-]

In the single-world case, I wouldn't want to do it. That's not because I care about the single world without me per se (as in caring for the people in the world), but because I care about myself who would not exist with ~1 probability.

Here's the thing: you obviously think that you dying is a bad thing. You apparently like living. Even if the odds of you dying were 20-80, I imagine you still wouldn't take the bet (in the single-world case) if the reward were only a few dollars, even though you would likely survive. This indicates that you care about possible futures where you don't exist - not in the sense that you care about people in those futures, but that you count those futures in your decision algorithm, and weigh them negatively. By analogy, I think you should care about branches where you die - not in the sense that you care about the welfare of the people in them, but that you should take those branches into account in your decision algorithm, and weigh them negatively.

Another reason I don't have a problem with discontinuity is that the whole problem seems only to arise when you have an infinite number of worlds, and I just don't feel like that argument is convincing.

I'm not sure what you can mean by this comment, especially "the whole problem". My arguments against discontinuity still apply even if you only have a superposition of two worlds, one with amplitude sqrt(x) and another with amplitude sqrt(1-x).

I don't think you need infinite knowledge to know whether x=0 or x>0, especially if you give some probability to higher level multiverses.

... I promise that you aren't going to be able to perform a test on a qubit that you can expect to tell you with 100% certainty that x > 0, even if you have multiple identical qubits.

You don't need to know for sure that x>0 (as you can't know anyway), but you can have 99.9% confidence that x>0 rather easily, conditional on MWI being true. As I explained, that is enough to take risks.

This wasn't my point. My point was that your preferences make huge value distinctions between universes that are almost identical (and in fact arbitrarily close to identical). Even though your value function is technically a function of the physical state of the universe, it's like it may as well not be, because arbitrary amounts of knowledge about the physical state of the universe still can't distinguish between types of universes which you value very different amounts. This intuitively seems irrational and crazy to me in and of itself, but YMMV.
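The 100%-versus-99.9% contrast can be made concrete with a toy Bayesian calculation. This is entirely my sketch, and the prior below is an assumption, not anything from the thread: suppose each measured qubit yields outcome 1 with probability x, and every measurement so far has come out 0. Confidence about whether x = 0 can climb as high as you like, but no finite run of measurements delivers certainty:

```python
def posterior_x_is_zero(n_zeros, prior_zero=0.5):
    """Posterior probability that x = 0 after n_zeros all-zero outcomes,
    with a point mass prior_zero on x = 0 and a uniform prior on (0, 1].
    The all-zeros likelihood given x is (1 - x)**n_zeros; integrating it
    against the uniform part of the prior gives 1 / (n_zeros + 1)."""
    evidence_if_positive = (1.0 - prior_zero) / (n_zeros + 1)
    return prior_zero / (prior_zero + evidence_if_positive)

# The posterior approaches 1 as n_zeros grows (n = 999 already gives
# about 0.999), but it never reaches 1 for any finite experiment.
```

The same asymmetry runs in the other direction: before measuring, you cannot expect any finite test to certify x > 0, since an all-zeros run is always possible.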

If I wake up after, in my case that I laid out, that would mean that I won, as I specified I would be killed while asleep. I could even specify that the entire lotto picking, noise generation, and checking is done while I sleep, so I don't have to worry about it.

I find it highly implausible that this should make a difference for your decision algorithm. Imagine that you could extend your life in all branches by a few seconds in which you are totally blissful. I imagine that this would be a pleasant change, and therefore preferable. You can then contemplate what will happen next in your pleasant state, and if my arguments go through, this would mean that your original decision was bad. So, we have a situation where you used to prefer taking the bet to not taking the bet, but when we made the bet sweeter, you now prefer not taking the bet. This seems irrational.

That said, I don't think the question of my subjective expectation of no longer existing is well-defined, because I don't have a subjective experience if I no longer exist.

I think it is actually well-defined? Right now, even if I were told that no multiverse exists, I would be pretty sure that I would continue living, even though I wouldn't be having experiences if I were dead. I think the problem here is that you are confusing my invocation of subjective probabilities about what will objectively happen next (while you're pondering your branch) with a statement about subjective experiences later.

I think we do have some more fundamental disagreement about anthropics, which I don't want to argue over until I hash out my viewpoint more.

I would be interested in reading your viewpoints about anthropics, should you publish them. That being said, given that you don't take the suicide bet in the single-world case, I think that we probably don't.
