casebash comments on Consequences of the Non-Existence of Perfect Theoretical Rationality - Less Wrong Discussion
Well, we don't actually need infinite suffering, just a large unknown unbounded number.
In order to guarantee being able to deliver whatever utility change the player demands in the way you describe, Omega needs there to be an infinite amount of suffering to relieve.
[EDITED to add:] If whoever downvoted this would like to explain my error, I'd be interested. It looks OK to me. Or was it just Eugine/Azathoth/Ra/Lion punishing me for not being right-wing enough again?
(retracted)
I made no claim that those are the only two possibilities. But, for what it's worth, here are the options I actually see. First, "legitimate" ones where someone read what I wrote, thought it was bad, and voted it down on that ground:
Then there are the options where the downvote was not on the basis of (actual or perceived) problems with the comment itself:
So. It looks to me like there are lots of low-probability explanations, plus "someone thinks I made a dumb mistake", plus "Eugine/Azathoth/Ra wanted to downvote something I wrote", both of which are things that have happened fairly often and are good candidates for what's happened here. And if someone thinks I made a dumb mistake, it seems like explaining it would be a good idea (whether the mistake is mine or theirs). Hence my comment.
(This feels like about two orders of magnitude more analysis than this trivial situation warrants, but no matter.)
On reflection, I see that you're right; I inferred too much from your comment. What you said was that you'd be interested in an explanation of your error, if and only if you committed one; followed by asking the separate, largely independent question of whether Eugine/Azathoth/Ra/Lion was punishing you for not being right-wing enough again. I erroneously read your comment as saying that you'd be interested in (1) an explanation of your error or (2) the absence of such an explanation, which would prove the Eugine hypothesis by elimination. Sorry for jumping the gun and forcing you into a bunch of unnecessary analysis.
No problem.
Indeed I was not claiming that the absence of an explanation would prove it was Eugine. It might simply mean that whoever downvoted me didn't read what I wrote, or that for whatever reason they didn't think it would be worth their while to explain. Or the actual reason for the downvote could be one of those low-probability ones.
One correction, though: I would be interested in an explanation of my error if and only if whoever downvoted me thinks I committed one. Even if in fact I didn't, it would be good to know if I failed to communicate clearly, and good for them to discover their error.
And now I shall drop the subject. (Unless someone does in fact indicate that they downvoted me for making a mistake and some sort of correction or clarification seems useful.)
(retracted)
Ah, I hadn't taken in that the person complaining rudely that I hadn't considered all the possibilities for why I got downvoted might be the person who downvoted me. In retrospect, I should have.
Anyway (and with some trepidation since I don't much relish getting into an argument with someone who may possibly just take satisfaction in causing petty harm): no, it doesn't look to me as if casebash's arguments are much like 2+2=5, nor do I think my comments are as obvious as pointing out that actually it's 4. The sort of expected-utility-maximizing that's generally taken around these parts to be the heart of rationality really does have difficulties in the presence of infinities, and that does seem like it's potentially a problem, and whether or not casebash's specific objections are right they are certainly pointing in the direction of something that could use more thought.
I do not think I have ever encountered any case in which deliberately making a problem worse to draw attention to it has actually been beneficial overall. (There are some kinda-analogous things in realms other than human affairs, such as vaccination, or deliberately starting small forest fires to prevent bigger ones, but the analogy isn't very close.)
If indeed LW has become irredeemably shit, then amplifying the problem won't fix it (see: definition of "irredeemably") so you might as well just fuck off and do something less pointless with your time. If it's become redeemably shit, adding more shit seems unlikely to be the best way of redeeming it so again I warmly encourage you to do something less useless instead. But these things seem so obvious -- dare I say it, so much like pointing out that 2+2=4? -- that I wonder whether, deep down, under the trollish exterior, there lurks a hankering for something better. Come to the Light Side! We have cookies.
LOL.
(retracted)
I can has raisin?
Not me.
Is there a practical difference between infinity and a "large unknown unbounded number"? It still runs into the issue of "is the number large enough?", and if it's not, you missed suffering you could've alleviated. So from an in-universe[1] viewpoint there is no reason not to state googol^googol, alphabetagammadeltaomega-quadtribillion[2], the total length of all possible 4096-bit public keys, or some other horror that might make reality crash.
[1] I'm assuming we're still in the "timeless celestial beings" universe. [2] I'm making stuff up.
If you create an actual infinity then things get weird. Many intuitive rules don't hold. So I don't want an actual infinity.
But a large, unknown number could easily be some sort of infinity.
Let's look at it another way. Say I choose some unknown number as you described. Any reason I couldn't be enlightened by "well, if you had chosen number+1, you could have saved the universe"?
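The "number+1" regret argument can be put as a toy model. This is an illustrative sketch only; the utility function and numbers are assumptions, not anything specified in the thread:

```python
# Toy model: if utility is strictly increasing and unbounded in the
# amount of suffering you ask to have relieved, then no finite request
# is optimal -- whatever n you pick, n + 1 would have been better.

def utility(n: int) -> int:
    """Illustrative stand-in: each extra unit relieved adds utility."""
    return n  # strictly increasing, unbounded

def regret(choice: int) -> int:
    """Utility forgone relative to having asked for one more unit."""
    return utility(choice + 1) - utility(choice)

# For any finite choice, regret is positive: there is no best answer.
for n in [10, 10**100]:
    assert regret(n) > 0
```

Under these assumptions the choice problem has no optimum, which is the "you could have chosen number+1" objection in miniature.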
I definitely am lacking in my mathematical knowledge so if there's a way to deal with un-measured numbers I'd appreciate if someone could enlighten me.
"But a large, unknown number could easily be some sort of infinity." - it could if I hadn't specified that we are assuming it is finite.
Then the best decision is to make some calculations: say, estimate how much suffering there is per km³ on average, multiply that by how much of the universe you can observe, then add an incredibly large number of 9s to its right side. Use all the excess utility to expand your space travel and observation and save the other planets from suffering.
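The estimate-then-pad heuristic above can be sketched as follows. Every figure here is a made-up placeholder (the suffering density is invented; the volume is only a rough order of magnitude for the observable universe):

```python
# Sketch of the proposed heuristic: estimate observed suffering,
# scale to the observable universe, then pad the estimate with 9s
# as a safety margin. All figures are hypothetical placeholders.

suffering_per_km3 = 7             # invented average density
observable_km3 = 4 * 10**71       # rough observable-universe volume in km^3
estimate = suffering_per_km3 * observable_km3

# "Add an incredibly large amount of 9s to its right side":
padded = int(str(estimate) + "9" * 100)
assert padded > estimate
```

Python's arbitrary-precision integers make the padding step exact, though as the thread notes, any finite pad still leaves the "could have chosen more" objection open.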