All of APMason's Comments + Replies

Shakespeare is good tho

[This comment is no longer endorsed by its author]
APMason30

Well, now he has another reason not to change his mind. Seems unwise, even if he's right about everything.

APMason-20

What does "an action that harms another agent" mean? For instance, if I threaten not to give you a chicken unless you give me $5, does "I don't give you a chicken" count as "a course of action that harms another agent"? Or does it have to be an active course, rather than an act of omission?

It's not blackmail unless, given that I don't give you $5, you would be worse off, CDT-wise, not giving me the chicken than giving me the chicken. Which is to say, you really want to give me the chicken but you're threatening to withhold it b...

Now that I think about it, wouldn't it be incredibly easy for an AI to blow a human's mind so much that they reconsider everything that they thought they knew? (And once this happened they'd probably be mentally and emotionally compromised, and unlikely to kill the AI.) It would then be limited by inferential distance... but an AI might be incredibly good at introductory explanations as well.

One example: The AI explains the Grand Unified Theory to you in one line, and outlines its key predictions unambiguously.

In fact, any message of huge utility woul...

1handoflixue
Duh, that's why I'm here - but you failed to do so in a timely fashion, so you're either not nearly as clever as I was hoping for, or you're trying to trick me. AI DESTROYED.
beriukay110

Were I the keeper of gates, you have just bought yourself a second sentence.

APMason-10

Bob Dylan's new album ("Tempest") is perfect. At the time of posting, you can listen to it free on the iTunes Store. I suggest you do so.

On another note, I'm currently listening to all the Miles Davis studio recordings and assembling my own best-of list. It'll probably be complete by next month, and I'll be happy to share the playlist with anyone who's interested.

APMason00

Thomas Bergersen is just wonderful. Also, I've been listening to a lot of Miles Davis (I'm always listening to a lot of Miles Davis, but I haven't posted in one of these threads before). I especially recommend In a Silent Way.

APMason00

Murakami is still the only currently living master of magical realism

Salman Rushdie. Salman Rushdie Salman Rushdie Salman Rushdie. Salman Rushdie.

-1[anonymous]
Even after a couple months, this comment still puzzles me. Yes, Rushdie is arguably a master of magical realism, but the original comment heavily implied that I don't think so. What good is repeating his name a couple times, then?
APMason20

If you haven't read much other Italo Calvino, "Invisible Cities" is really, really, really great.

1gwern
Borges and Calvino are 2 of my favorite authors, and Invisible Cities is my favorite Calvino collection. (And, as seems inevitable for me, I wrote some Calvino fanfiction.)
APMason30

I have to say, as a more-or-less lifelongish fan of Oscar Wilde (first read "The Happy Prince" when I was eight or nine), that the ending to Earnest is especially weak. I like the way he builds his house of cards in that play, and I like the dialogue, but (and I think I probably speak for a lot of Wilde fans here) the way he knocks the cards down really isn't all that clever or funny. For a smarter Wilde play, see "A Woman of No Importance", although his best works are his children's stories, "The Picture of Dorian Gray", and ...

0tgb
Well stated about The Importance of Being Earnest. Thanks for the other suggestions! (There's also a wonderful BBC version of several of the Jeeves and Wooster stories starring Hugh Laurie and Stephen Fry which I would highly recommend.)
APMason20

You sure about this?

Nope, not sure at all.

9TimS
Baruch Spinoza is probably the most famous available piece of evidence. He was shunned (cf. excommunication), not executed. Not sure what conclusion to draw, given the Enlightenment era.
APMason100

I don't think that question's going to give you the information you want - when in the last couple of thousand years, if Jews had wanted to stone apostates to death, would they have been able to do it? The diasporan condition doesn't really allow it. I think Christianity really is the canonical example of the withering away of religiosity - and that happened through a succession of internal revolutions ("In Praise of Folly", Lutheranism, the English Reformation etc.) which themselves happened for a variety of reasons, not all pure or based in rat...

0torekp
Internal revolutions, i.e. schisms, are key in my understanding too. I suspect that all the wars of the Reformation had a lot to do with the re-invention of the concept of religious toleration and its eventual spread across Europe. But perhaps even without soaking a continent in blood, schism can do its work. Exposure to a variety of religions seems likely to make people skeptical of enthroning any one of them. Thus, atheism is only marginally relevant to freedom from religious oppression. The real key is alternate religions. If you would free people, underwrite the books or broadcasts by the next Erasmus or Luther or Rumi.
3DanArmak
Unless we're talking about apostates who converted to Christianity (or Islam etc.) and claimed that society's protection, Jews could probably have stoned apostates at any point until civil rights were granted to Jews. Which happened in different European countries at any point between, offhand, the 15th and 20th centuries.
3Viliam_Bur
You sure about this? I don't know much about this topic, but I remember reading somewhere that 200 or more years ago Jews were often allowed to give punishment to their own people within the diaspora. They couldn't stone a Christian/Muslim from the majority population, but they could stone (or otherwise kill, or otherwise severely punish) one of their own -- unless the given sinner had already converted to Christianity/Islam and left their community. So converting to the majority religion could be safe, but converting to atheism or some heresy within Judaism would not.
APMason130

I agree with pretty much everything you've said here, except:

You only cooperate if you expect your opponent to cooperate if he expects you to cooperate ad nauseam.

You don't actually need to continue this chain - if you're playing against any opponent which cooperates iff you cooperate, then you want to cooperate - even if the opponent would also cooperate against someone who cooperated no matter what, so your statement is also true without the "ad nauseam" (provided the opponent would defect if you defected).
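A minimal illustration of the one-level version of this point; the Prisoner's Dilemma payoffs below are standard assumed values, not anything from the thread:

```python
# Illustrative payoffs (assumed): mutual cooperation 3, mutual defection 1,
# lone defector 5, lone cooperator 0. Keys are (my move, their move).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def mirror(my_move):
    """An opponent that cooperates iff you cooperate."""
    return my_move

def unconditional_cooperator(my_move):
    """An opponent that cooperates no matter what."""
    return "C"

for opponent in (mirror, unconditional_cooperator):
    for my_move in ("C", "D"):
        payoff = PAYOFF[(my_move, opponent(my_move))]
        print(f"{opponent.__name__}: I play {my_move}, I get {payoff}")

# mirror: C -> 3, D -> 1, so cooperate; no further chain of expectations needed.
# unconditional_cooperator: C -> 3, D -> 5, so defect (the "provided" clause).
```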

2Grognor
You're right. I assumed symmetry, which was wrong.
APMason00

What sort of examples can you bring up of custom marital contracts that would make people scream in horror? My guess is that people would generally feel queasy about allowing legal enforcement of what looks like slavish or abusive relationships. I think this would be a genuine cause for concern, not because I don't think that people should be able to enter whatever relationships please them in principle, but because in practice I'm concerned about people being coerced into signing contracts harmful to themselves. Not sure where I'd draw the line exactly;

...
0[anonymous]
.
APMason00

Thank you. I had expected the bottom to drop out of it somehow.

EDIT: Although come to think of it I'm not sure the objections presented in that paper are so deadly after all if you take TDT-like considerations into account (i.e. there would not be a difference between "kill 1 person, prevent 1000 mutilations" + "kill 1 person, prevent 1000 mutilations" and "kill 2 people, prevent 2000 mutilations"). Will have to think on it some more.

APMason00

Can anyone explain what goes wrong if you say something like, "The marginal utility of my terminal values increases asymptotically, and u(Torture) approaches a much higher asymptote than u(Dust speck)" (or indeed whether it goes wrong at all)?
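A rough sketch of the construction being asked about; the asymptote values and growth rates below are invented purely for illustration:

```python
import math

# Hypothetical bounded disutility curves; asymptotes and rates are assumptions.
A_TORTURE = 1000.0   # asymptote for torture disutility
A_SPECK = 1.0        # much lower asymptote for dust-speck disutility

def u_torture(n):
    return A_TORTURE * (1 - math.exp(-n))

def u_specks(n):
    return A_SPECK * (1 - math.exp(-n / 1e6))

print(u_torture(1))    # ~632: one person tortured
print(u_specks(1e30))  # ~1.0: effectively at the asymptote
# (1e30 stands in for 3^^^3, which is far too large to represent.)
# However many specks there are, u_specks(n) < 1 < 632, so an agent with
# these disutilities always prefers any number of specks to the torture.
```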

1Lukas_Gloor
That's been done in this paper, section VI, "The Asymptotic Gambit".
1wedrifid
Nothing, iff that happens to be what your actual preferences are. If your preferences did not happen to be as you describe but instead you are confused by an inconsistency in your intuitions, then you will make incorrect decisions. The challenge is not to construct a utility function such that you can justify it to others in the face of opposition. The challenge is to work out what your actual preferences are and implement them.
1TheOtherDave
Depends on what I'm trying to do. If I make that assumption, then it follows that given enough Torture to approach its limit, I choose any number of Dust Specks rather than that amount of Torture. If my goal is to come up with an algorithm that leads to that choice, then I've succeeded. (I think talking about Torture and Dust Specks as terminal values is silly, but it isn't necessary for what I think you're trying to get at.)
APMason10

That last sentence didn't make sense to me when I first looked at this. Think you must mean "worse", not "better".

0Lukas_Gloor
Indeed, thanks.
APMason00

This variation of the problem was invented in the follow-up post (I think it was called "Sneaky strategies for TDT" or something like that):

Omega tells you that earlier he flipped a coin. If the coin came down heads, it simulated a CDT agent facing this problem. If the coin came down tails, it simulated a TDT agent facing this problem. In either case, if the simulated agent one-boxed, there is $1000000 in Box-B; if it two-boxed, Box-B is empty. In this case TDT still one-boxes (a 50% chance of $1000000 dominates a 100% chance of $1000), and CDT still...

4loup-vaillant
This is not a zero-sum game. CDT does not outperform TDT here. It just makes a stupid mistake, and happens to pay for it less dearly than TDT. Let's say Omega submits the same problem to 2 arbitrary decision theories. Each will either 1-box or 2-box. Here is the average payoff matrix:
* Both a and b 1-box -> they both get the million.
* Both a and b 2-box -> they both get 1000 only.
* One 1-boxes, the other 2-boxes -> the 1-boxer gets half a million, the other gets 5000 more.
Clearly, 1-boxing still dominates 2-boxing. Whatever the other does, you personally get about half a million more by 1-boxing. TDT may have less utility than CDT for 1-boxing, but CDT is still stupid here, while TDT is not.
APMason00

Wait a minute, what exactly do you mean by "you"? TDT? or "any agent whatsoever"? If it's TDT alone why? If I read you correctly, you already agree that's it's not because Omega said "running TDT" instead of "running WTF-DT". If it's "any agent whatsoever", then are you really sure the simulated and real problem aren't actually the same? (I'm sure they aren't, but, just checking.)

Well, no, this would be my disagreement: it's precisely because Omega told you that the simulated agent is running TDT that on...

2loup-vaillant
This comment from lackofcheese finally made it click. Your comment also makes sense. I now understand that this "problematic" problem just isn't fair. TDT 1-boxes because it's the only way to get the million.
APMason00

Well, in the problem you present here TDT would 2-box, but you've avoided the hard part of the problem from the OP, in which there is no way to tell whether you're in the simulation or not (or at least there is no way for the simulated you to tell), unless you're running some algorithm other than TDT.

0loup-vaillant
I see no such hard part. To get back to the exact original problem as stated by the OP, I only need to replace "you" by "an agent running TDT", and "your simulated twin" by "the simulated agent". Do we agree? Assuming we do agree, are you telling me the hard part is in that change? Are you telling me that TDT would 1-box in the original problem, even though it 2-boxes on my problem? WHYYYYY? Wait a minute, what exactly do you mean by "you"? TDT? or "any agent whatsoever"? If it's TDT alone why? If I read you correctly, you already agree that's it's not because Omega said "running TDT" instead of "running WTF-DT". If it's "any agent whatsoever", then are you really sure the simulated and real problem aren't actually the same? (I'm sure they aren't, but, just checking.)
APMason30

"Father figure" seems to me to permit either position, "father" not so much. It's always troublesome when someone declares that you can only be properly impartial by agreeing with them.

APMason30

For the iterated 2-player Prisoner's Dilemma, you cooperate when the other player cooperates, and defect when the other player defects. Always cooperating is not the best strategy; you need to respond to the other player's actions.

Actually you only cooperate if the other player would defect if you didn't cooperate. If they cooperate no matter what, defect.

APMason00

I guess so, although looking at it now Elcenia seems to be pretty massive. It will take me a couple of weeks to catch up at least (unless it's exceptionally compelling, in which case damn you in advance for taking up all my time), and we also have to allow for the possibility that it's not just my kind of thing, in which case trying to finish it will make me miserable and I won't be much use to you anyway. But sure, I'll give it a shot.

0Alicorn
Apparently you now come with references. Any interest in joining my betabet*? (It generally meets on IRC and betas in realtime; I don't know if that works for you. You would also need to be caught up on Elcenia, unless you want to do only short stories the way my thetabeta does.) *My betas get Greek letter designations (alphabeta, betabeta, etc.) and are collectively a betabet, analogous to an alphabet.
0[anonymous]
Yeah, this was fantastic. Note to other Less Wrong readers: APMason is an excellent beta reader.
APMason10

Okay, I wrote up my thoughts, but it's pretty long and I'm not sure it's fair to post it here (also it's too long for a PM). Do you have an email I can send it to?

0[anonymous]
It's my handle @ sonic.net. Thanks in advance!
APMason60

What happens if you're using this method and you're offered a gamble where you have a 49% chance of gaining 1,000,000 utils and a 51% chance of losing 5 utils (if you don't take the deal you gain and lose nothing)? Isn't the "typical outcome" here a loss, even though we might really really want to take the gamble? Or have I misunderstood what you propose?
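A quick sketch of this gamble using the comment's own numbers; a "typical outcome" (median) rule and expected utility come apart here:

```python
import statistics

# The gamble from the comment: 49% chance of +1,000,000 utils,
# 51% chance of -5 utils. One list entry per percentile point.
outcomes = [1_000_000] * 49 + [-5] * 51

print(statistics.median(outcomes))  # -5.0: the "typical outcome" is a small loss
print(statistics.mean(outcomes))    # 489997.45: expected utility says take it
```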

0yttrium
Depending on the rest of your utility distribution, that is probably true. Note, however, that an additional 10^6 utility in the right half of the utility function will change the median outcome of your "life": if 10^6 is larger than all the other utility you could ever receive, and you add a 49% chance of receiving it, the 50th-percentile utility after that should look like the 98th-percentile utility before.
-2faul_sname
In such a case, the median outcome of all agents will be improved if every agent with the option to do so takes that offer, even if they are assured that it is a once-per-lifetime offer (because presumably there is variance of more than 5 utils between agents).
APMason40

I might be interested in giving a fuller critique of this at some point (but then who the hell am I), but for now I'll confine myself to just one point:

It was, of course, a highly ceremonial occasion...

The reader knows that the narrator knows more about this world than they do. The reader is okay with that. Trying to impart information by pretending that the reader already knows it seems clumsy and distracting to me. Compare with:

It was a highly ceremonial occasion, excruciatingly ritualized, and he was bored.

I think this is fine. No need to pretend you're the reader's chum.

0[anonymous]
Thanks for the note -- and I'd be very happy to see any other comments you might have.
APMason20

I think the clearest and simplest version of Problem 1 is where Omega chooses to simulate a CDT agent with .5 probability and a TDT agent with .5 probability. Let's say that Value-B is $1000000, as is traditional, and Value-A is $1000. TDT will one-box for an expected value of $500000 (as opposed to $1000 if it two-boxes), and CDT will always two-box, receiving an expected $501000. Both TDT and CDT have an equal chance of playing against each other in this version, and an equal chance of playing against themselves, and yet CDT still outperforms. It seems...
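A quick check of the arithmetic above, assuming (as is standard) that one-boxing means taking only Box B:

```python
# Box values from the comment; Omega's coin picks which agent gets simulated.
VALUE_B, VALUE_A = 1_000_000, 1_000
P_SIM_IS_TDT = 0.5

# The TDT sim one-boxes (so Box B is full); the CDT sim two-boxes (so it's empty).
ev_tdt_one_box = P_SIM_IS_TDT * VALUE_B   # 500000.0: Box B full half the time
ev_tdt_two_box = VALUE_A                  # 1000: its own sim then two-boxes too
ev_cdt = P_SIM_IS_TDT * (VALUE_B + VALUE_A) + (1 - P_SIM_IS_TDT) * VALUE_A
                                          # 501000.0: CDT always takes both boxes

print(ev_tdt_one_box, ev_tdt_two_box, ev_cdt)
```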

APMason20

Hmm, if I've understood this correctly, it's the way I've always thought about decision theory for as long as I've had a concept of expected utility maximisation. Which makes me think I must have missed some important aspect of the ex post version.

APMason40

I'm not sure whether it is the case that primitive cultures have a category of things they think of as "supernatural" - pagan religions were certainly quite literal about their gods: they lived on Olympus, they mated with humans, they were birthed. I wonder whether the distinction between "natural" and "supernatural" only comes about when it becomes clear that gods don't belong in the former category.

3TimS
I had a paragraph about that, citing Explain/Worship/Ignore, but I decided that it detracted from the point I was trying to make. If you already think that primitives did not use the label "supernatural," then you already think there isn't much evidence of supernatural phenomena - at least compared to the post I was responding to.
APMason50

And that's without even getting into my experiences, or those close to me.

Well, don't be coy. There's no point in withholding your strongest piece of evidence. Please, get into it.

0Jakinbandw
As already pointed out, would it change either my beliefs or your beliefs? I've already recounted a medical mystery with my foot and blood loss. It comes down in the end to my word, and that of people I know. We could all be lying. There is no long-term proof, so I don't see any need to explain it. That was my point. What is strong proof to me is weak proof to others, because I know that I am not lying. I have no way to prove I am not lying, however, so what would be the point?
APMason90

Why is it important for a decision theory to pass fair tests but not unfair tests?

Well, on unfair tests a decision theory still needs to do as well as possible. If we had a version of the original Newcomb's problem, with the one difference that a CDT agent gets $1 billion just for showing up, it's still incumbent upon a TDT agent to walk away with $1000000 rather than $1000. The "unfair" class of problems is that class where "winning as much as possible" is distinct from "winning the most out of all possible agents".

APMason100

Why are we not counting philosophers? Isn't that like saying, "Not counting physicists, where's this supposed interest in gravity?"

4A1987dM
Engineering.
-4taw
Philosophy contains some useful parts, but it also contains massive amounts of bullshit. Starting, let's say, here. Decision theory is studied very seriously by mathematicians and others, and they don't care at all for Newcomb's Paradox.
Wei Dai100

I think taw's point was that Newcomb's Problem has no practical applications, and would answer your question by saying that engineers are very interested in gravity. My answer to taw would be that Newcomb's Problem is just an abstraction of Prisoner's Dilemma, which is studied by economists, behavior biologists, evolutionary psychologists, and AI researchers.

APMason00

Well, I've had a think about it, and I've concluded that it would matter how great the difference between TDT and TDT-prime is. If TDT-prime is almost the same as TDT, but has an extra stage in its algorithm in which it converts all dollar amounts to yen, it should still be able to prove that it is isomorphic to Omega's simulation, and therefore will not be able to take advantage of "logical separation".

But if TDT-prime is different in a way that makes it non-isomorphic, i.e. it sometimes gives a different output given the same inputs, that may ...

APMason90

Hmm, so TDT-prime would reason something like, "The TDT simulation will one-box because, not knowing that it's the simulation, but also knowing that the simulation will use exactly the same decision theory as itself, it will conclude that the simulation will do the same thing as itself and so one-boxing is the best option. However, I'm different to the TDT-simulation, and therefore I can safely two-box without affecting its decision." In which case, does it matter how inconsequential the difference is? Yep, I'm confused.

-1MugaSofer
Sounds like you have it exactly right.
4drnickbone
I also had thoughts along these lines - variants of TDT could logically separate themselves, so that T-0 one-boxes when it is simulated, but T-1 has proven that T-0 will one-box, and hence T-1 two-boxes when T-0 is the sim. But a couple of difficulties arise.
The first is that if TDT variants can logically separate from each other (i.e. can prove that their decisions aren't linked) then they won't co-operate with each other in Prisoner's Dilemma. We could end up with a bunch of CliqueBots that only co-operate with their exact clones, which is not ideal.
The second difficulty is that for each specific TDT variant, one with algorithm T' say, there will be a specific problematic problem on which T' will do worse than CDT (and indeed worse than all the other variants of TDT) - this is the problem with T' being the exact algorithm running in the sim. So we still don't get the - desirable - property that there is some sensible decision theory called TDT that is optimal across fair problems.
The best suggestion I've heard so far is that we try to adjust the definition of "fairness", so that these problematic problems also count as "unfair". I'm open to proposals on that one...
APMason10

You can see that something funny has happened by postulating TDT-prime, which is identical to TDT except that Omega doesn't recognize it as a duplicate (e.g., it differs in some way that should be irrelevant). TDT-prime would two-box, and win.

I don't think so. If TDT-prime two-boxes, the TDT simulation two-boxes, so only one box is full, so TDT-prime walks away with $1000. Omega doesn't check what decision theory you're using at all - it just simulates TDT and bases its decision on that. I do think that this ought to fall outside a rigorously defined class of "fair" problems, but it doesn't matter whether Omega can recognise you as a TDT-agent or not.

3jimrandomh
No, if TDT-prime two-boxes, the TDT simulation still one-boxes.
APMason70

This problem is the reason why decision theories have to be non-deterministic. It comes up all the time in real life: I try and guess what safe combination you chose, try that combination, and if it works I take all your money.

Of course, you can just set up the thought experiment with the proviso that "be unpredictable" is not a possible move - in fact that's the whole point of Omega in these sorts of problems. If Omega's trying to break into your safe, he takes your money. In Nesov's problem, if you can't make yourself unpredictable, then you...

APMason60

Okay, so there's no such thing as jackalopes. Now I know.

2Alicorn
Hee hee.
APMason40

At a certain point the psychological quality of life of living individuals that comes from living in a society with a certain structure and values may trump the right of individuals who thought they were dead to live once more.

This is vague. Can you pinpoint exactly why you think this would damage people's psychological quality of life?

5Bart119
Yes, it was vague. I'll try to be more precise -- as much as I can.
Suppose we do a pilot experiment in a small region on the Tigris and Euphrates where people have been living in high population densities for a long time. We have large numbers of people coming back from the dead, perhaps 10 times the current population? Perhaps with infant mortality we have 5 times as many children as adults -- lots of infants and young children. But the UN is ready, prepared in advance. There is land for everyone. We figure at least that the dead have lost the right to their property, so we put them all up in modular housing we make outside the present city. But there are so many formerly dead, from older linguistic and cultural and religious groups, that they form their own political parties and take over the government. I could go on, but it's apparent to me that the social order is completely messed up.
Now suppose I'm an Egyptian, and it comes to a vote: Do we want to implement this program in Egypt? Assuming that the as-yet-unresurrected dead don't get a vote, I can see the proposal being voted down overwhelmingly. My moral intuition is that the Egyptians have no moral obligation to resurrect their ancestors. They have a right to continue their ways of existence.
Of course, this is an extreme thought experiment, and arguing about details won't be productive. I have a similar intuition about, say, unrestricted immigration. If someone said that utility would be maximized if anyone could move anywhere on earth they wanted, I have an intuition that I as an American have a right to resist that. The status quo has some weight.
Applying rationality to problems can go too far. In the late 19th and early 20th centuries, a lot of very smart, very thoughtful, very knowledgeable people thought Communism was going to be a great idea. But due to a few slip-ups and miscalculations, it turned out it wasn't -- which we can see with hindsight. No, they didn't have modern notions of rati...
APMason180

If information cannot travel back more than six hours

This does seem to be a constraint that exclusively affects the Time-Turners. Otherwise prophecies wouldn't be possible. It also seems like it's an artificial rule rather than a deep law of magic, because after the Stanford Prison Experiment, Bones tells Dumbledore that she has information from four hours in the future and asks whether he'd like to know it. That there is relevant information from four hours in the future is itself information from the future - she would not have said that if it were otherwise...

1JoshuaZ
That's information to a careful logical thinker. There's a lot of evidence that magic to a large extent acts as a naive person might expect reality to act. Broomsticks and the bag of holding are both examples of this.
APMason70

Edit: In other words, I think Torture v. Specks is just a restatement of the Repugnant Conclusion.

The Repugnant Conclusion can be rejected by average-utilitarianism, whereas in Torture vs. Dustspecks average-utilitarianism still tells you to torture, because the disutility of 50 years of torture divided among 3^^^3 people is less than the disutility of 3^^^3 dustspecks divided among 3^^^3 people. That's an important structural difference to the thought experiment.
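A toy version of the average-utilitarian comparison; the per-unit disutilities below are invented, and an ordinary large N stands in for 3^^^3 (which is far too large to represent):

```python
# Invented disutility numbers, purely to show the structure of the argument.
N = 10**30         # stand-in for 3^^^3 people
TORTURE = 10**12   # assumed total disutility of 50 years of torture
SPECK = 1          # assumed disutility of one dust speck

avg_if_torture = TORTURE / N     # ~1e-18 per capita: negligible
avg_if_specks = (SPECK * N) / N  # exactly 1 per capita

print(avg_if_torture < avg_if_specks)  # True: average utilitarianism tortures
```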

APMason120

Although I agree that he got the Amanda Knox thing right, I don't think it actually counts as a prediction - he wasn't saying whether or not the jury would find her innocent the second time round, and as far as I can tell no new information came out after Eliezer made his call.

gwern140

As I pointed out in the postmortem, there were at least 3 distinct forms of information that one could update on well after that survey was done.

APMason40

Oh, I see your argument now (not that I think it's decisive enough to make your interpretation "clearly" the correct one, but, you know, whatever) - notice though that there was no way I could have guessed it from the great^3-grandparent. I would have said that's why you were downvoted initially, but looking through your comment history it's quite possible there is someone automatically downvoting your comments regardless of content, in which case I really don't know what to tell you. Sorry about that.

APMason90

Okay, seriously, how strong do you think the groupthink effect could possibly be on the question of whether Harry's dark side is a piece of Voldemort's soul in HPMOR? For the record I think you were probably downvoted for claiming that something was "clearly" implied when I (and so presumably others) can't see how it's implied at all (and I still can't see it, having read the comment which is apparently supposed to make it clear, and which wasn't, incidentally, linked to in the great-grandparent), and then downvoted further when you decided to insult everyone.

2ArisKatsaris
At this point I wouldn't be surprised if there existed at least one person who did follow chaosmosis around to downvote everything he said. I strongly disapprove of this being done, but it's the inevitable conclusion when someone chooses to spew insults on other people en masse.
-5chaosmosis
APMason-10

1-2-3-4-5-6 is a Schelling point for overt tampering with a lottery.

I don't think that's true. If you were going to tamper with the lottery, isn't your most likely motive that you want to win it? Why, then, set it up in such a way that you have to share the prize with the thousands of other people who play those numbers?

1sixes_and_sevens
I specified "overt tampering" rather than "covert tampering". If you wanted to choose a result that would draw suspicion, 1-2-3-4-5-6 strikes me as the most obvious candidate.
APMason100

Eliezer's article is actually quite long, and not the only article he's written on the subject on this site - it seems uncharitable to decide that "Huh?" is somehow the most crucial part of it. Also, whether or not there is widespread consensus that science can in principle say nothing about subjective phenomenology, there is certainly no such consensus amongst reductionists - it simply wouldn't be very reductionist, would it?

-7[anonymous]
APMason60

You happen to have carved out a small portion of the Internet, a medium that aside from porn is primarily for pirates vs. ninjas debates, and declared it's for some other purpose. That doesn't mean you're allowed to be surprised when pirates vs. ninjas debates happen.

Is he allowed to be surprised when lesswrong porn happens?

7thomblake
I think porn itself has somehow managed to stay off Less Wrong long enough to warrant surprise. But no, it's not warranted to believe that porn about Less Wrong does not exist. Rule 34.
APMason00

Did Eliezer say that Lucius interrogated Draco himself? I can't find it - I had assumed it was aurors, who in the course of investigating this particular crime would have no reason even to mention Harry's name.

0pedanterrific
I don't think so, no.
APMason00

And I believe he was interrogated by aurors investigating this crime - in which Harry was not involved - not by Malfoy.

0thomblake
Aha. Missed the cultural context. Thanks!