Dweomite

Though I do somewhat wish there was a section here that reviews the plot, for those of us who are curious about what happens in the book without reading 1M+ words.

I think I could take a stab at a summary.

This is going to elide most of the actual events of the story to focus on the "main conflict" that gets resolved at the end of the story. (I may try to make a more narrative-focused outline later if there's interest, but this is already quite a long comment.)

As I see it, the main conflict (the exact nature of which doesn't become clear until quite late) is mainly driven by two threads that develop gradually throughout the story... (major spoilers)

The first thread is Keltham's gradual realization that the world of Golarion is pretty terrible for mortals, and is being kept that way by the power dynamics of the gods.

The key to understanding these dynamics is that certain gods (and coalitions of gods) have the capability to destroy the world. However, the gods all know (Eliezer's take on) decision theory, so you can't extort them by threatening to destroy the world. They'll only compromise with you if you would honestly prefer destroying the world to the status quo, if those were your only two options. (And they have ways of checking.) So the current state of things is a compromise to ensure that everyone who could destroy the world, prefers not to.

Keltham would honestly prefer destroying Golarion (primarily because a substantial fraction of mortals currently go to hell and get tortured for eternity), so he realizes that if he can seize the ability to destroy the world, then the gods will negotiate with him to find a mutually-acceptable alternative.

Keltham speculates (though it's only speculation) that he may have been sent to Golarion by some powerful but distant entity from the larger multiverse, as the least-expensive way of stopping something that entity objects to.

The second thread is that Nethys (god of knowledge, magic, and destruction) has the ability to see alternate versions of Golarion and to communicate with alternate versions of himself, and he's seen several versions of this story play out already, so he knows what Keltham is up to.  Nethys wants Keltham to succeed, because the new equilibrium that Keltham negotiates is better (from Nethys' perspective) than the status quo.

However, it is absolutely imperative that Nethys does not cause Keltham to succeed, because Nethys does not prefer destroying the world to the status quo.  If Keltham only succeeds because of Nethys' interventions, the gods will treat Keltham as Nethys' pawn, and treat Keltham's demands as a threat from Nethys, and will refuse to negotiate.

Nethys can therefore only intervene in ways that all of the major gods will approve of (in retrospect).  So he runs around minimizing collateral damage, nudges Keltham towards being a little friendlier in the final negotiations, and very carefully never removes any obstacle from Keltham's path until Keltham has proven that he can overcome it on his own.

Nethys considers it likely that this whole situation was intentionally designed as some sort of game by some unknown entity.  (Partly because Keltham makes several successful predictions based on dath ilani game tropes.)

At the end of the story, Keltham uses an artifact called the starstone to turn himself into a minor god, then uses his advanced knowledge of physics (unknown to anyone else in the setting, including the gods) to create weapons capable of destroying the world, announces that that's his BATNA, and successfully negotiates with the rest of the gods to shut down hell, stop stifling mortal technological development, and make a few inexpensive changes to improve overall mortal quality-of-life.  Keltham then puts himself into long-term stasis to see if the future of this world will seem less alienating to him than the present.

Sounds like you agree with both me and Ninety-Three about the descriptive claim that the Shapley Value has, in fact, been changed, and have not yet expressed any position regarding the normative claim that this is a problem?

I'm not sure what you're trying to say.

My concern is that if Bob knows that Alice will consent to a Shapley distribution, then Bob can seize more value for himself without creating new value.  I feel that a person or group shouldn't be able to get a larger share by intentionally hobbling themselves.

If B1 and B2 structure their cartel such that each of them gets a veto over the other, then the synergies change so that A+B1 and A+B2 both generate nothing, and you need A+B1+B2 to make the $100, which means B1 and B2 each now have a Shapley value of $33.3 (up from $25).

Also, I wouldn't describe the original Shapley Values as "no coordination".  With no coordination, there's no reason the end result should involve paying any non-zero amount to both B1 and B2, since you only need one of them to assent.  I think Shapley Values represent a situation that's more like "everyone (including Alice) coordinates".

A problem I have with Shapley Values is that they can be exploited by "being more people".

Suppose Alice and Bob can make a joint venture with a payout of $300.  Synergies:

  • A: $0
  • B: $0
  • A+B: $300

Shapley says they each get $150.  So far, so good.

Now suppose Bob partners with Carol and they make a deal that any joint ventures require both of them to approve; they each get a veto.  Now the synergies are:

  • A+B: $0 (Carol vetoes)
  • A+C: $0 (Bob vetoes)
  • B+C: $0 (venture requires Alice)
  • A+B+C: $300

Shapley now says Alice, Bob, and Carol each get $100, which means Bob+Carol are getting more total money ($200) than Bob alone was ($150), even though they are (together) making exactly the same contribution that Bob was paid $150 for making in the first example.

(Bob personally made less, but if he charges Carol a $75 finder's fee then Bob and Carol both end up with more money than in the first example, while Alice ends up with less.)

By adding more partners to their coalition (each with veto power over the whole collective), the coalition can extract an arbitrarily large share of the value.
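
Here's a quick brute-force sketch of the calculation (just averaging each player's marginal contribution over every ordering), using the coalition values from the two examples above:

```python
from itertools import permutations

def shapley(players, value):
    """Brute-force Shapley values: average each player's marginal
    contribution to the coalition over every ordering of the players."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += value(with_p) - value(coalition)
            coalition = with_p
    return {p: t / len(orderings) for p, t in totals.items()}

# First example: Alice and Bob are each worth $0 alone, $300 together.
v1 = lambda s: 300 if {"A", "B"} <= s else 0
print(shapley(["A", "B"], v1))       # {'A': 150.0, 'B': 150.0}

# Second example: Bob and Carol each hold a veto, so only the full
# three-person coalition (which still needs Alice) produces the $300.
v2 = lambda s: 300 if {"A", "B", "C"} <= s else 0
print(shapley(["A", "B", "C"], v2))  # {'A': 100.0, 'B': 100.0, 'C': 100.0}
```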

Seems like that guy has failed to grasp the fact that some things are naturally more predictable than others.  Estimating how much concrete you need to build a house is just way easier than estimating how much time you need to design and code a large novel piece of software (even if the requirements don't change mid-project).

Is that error common?  I can only recall encountering one instance of it with surety, and I only know about that particular example because it was signal-boosted by people who were mocking it.

I'm confused about how continuity poses a problem for "This sentence has truth value in [0,1)" without also posing an equal problem for "this sentence is false", which was used as the original motivating example. 

I'd intuitively expect "this sentence is false" == "this sentence has truth value 0" == "this sentence does not have a truth value in (0,1]".


On my model, the phrase "I will do X" can be either a plan, a prediction, or a promise.

A plan is what you intend to do.

A prediction is what you expect will happen.  ("I intend to do my homework after dinner, but I expect I will actually be lazy and play games instead.")

A promise is an assurance.  ("You may rely upon me doing X.")

How about this: I train on all available data, but only report performance for the lots predicted to be <$1000?

This still feels squishy to me (even after your footnote about separately tracking how many lots were predicted <$1000). You're giving the model partial control over how the model is tested.

The only concrete abuse I can immediately come up with is that maybe it cheats like you predicted by submitting artificially high estimates for hard-to-estimate cases, but you miss it because it also cheats in the other direction by rounding down its estimates for easier-to-predict lots that are predicted to be just slightly over $1000.

But just like you say that it's easier to notice leakage than to say exactly how (or how much) it'll matter, I feel like we should be able to say "you're giving the model partial control over which problems the model is evaluated on, this seems bad" without necessarily predicting how it will matter.

My instinct would be to try to move the grading closer to the model's ultimate impact on the client's interests.  For example, if you can determine what each lot in your data set was "actually worth (to you)", then perhaps you could calculate how much money would be made or lost if you'd submitted a given bid (taking into account whether that bid would've won), and then train the model to find a bidding strategy with the highest expected payout.
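
To gesture at the kind of scoring I have in mind (purely a sketch; the names and the rule for when a bid "wins" are my assumptions, not anything from your setup):

```python
def expected_profit(bids, actual_worths, winning_prices):
    """Hypothetical profit-based score: a bid "wins" a lot only if it meets
    the price that historically won that lot; a won lot pays out its actual
    worth minus the bid, and a lost bid scores zero (ignoring any
    opportunity cost)."""
    profits = []
    for bid, worth, winning_price in zip(bids, actual_worths, winning_prices):
        profits.append(worth - bid if bid >= winning_price else 0.0)
    return sum(profits) / len(profits)

# Toy usage on three historical lots.
print(expected_profit(bids=[900, 400, 1200],
                      actual_worths=[1100, 350, 1500],
                      winning_prices=[850, 500, 1000]))  # (200 + 0 + 300) / 3
```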

But I can imagine a lot of reasons you might not actually be able to do that: maybe you don't know the "actual worth" in your training set, maybe unsuccessful bids have a hard-to-measure opportunity cost, maybe you want the model to do something simpler so that it's more likely to remain useful if your circumstances change.

Also, you sound like you do this for a living, so I put about 30% probability on you telling me that my concerns are wrong-headed for some well-studied reason I've never heard of.
