Dagon

Just this guy, you know?


Comments

Dagon

All models are wrong, some models are useful.  

Examples of things with structure

  • the sequence 01010101 has a repetitive structure
  • The US interstate system has a graph structure
  • Monopoly has a game-theoretic structure

I think this is wrong.  Replace "have/has" with "can be modeled as a".  We don't know whether that structure is the actual cause, or the only structure consistent with the output.  Recognizing that inferred structures are neither exclusive nor complete is useful here - many possible structures can generate very similar outputs.  If your goal is to predict future outputs, you probably need to find the actual generator, not just pick a structure that could be it.
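To make the underdetermination point concrete, here's a toy Python sketch (the generators are my own illustrations, not from the post): three structurally different programs emit the same 01010101.

```python
# Three different "structures" that all produce the sequence 01010101.
# Observing the output alone can't tell you which generator is real.

def alternating(n):
    # repetitive structure: 0, 1, 0, 1, ...
    return "".join(str(i % 2) for i in range(n))

def lookup_table(n):
    # a hard-coded table that happens to match up to 8 symbols
    return "01010101"[:n]

def xor_feedback(n):
    # a tiny state machine: each bit is the previous bit XOR 1
    bits, b = [], 0
    for _ in range(n):
        bits.append(str(b))
        b ^= 1
    return "".join(bits)

print(alternating(8), lookup_table(8), xor_feedback(8))  # all "01010101"
# Ask for a 9th symbol and the table runs out while the others continue,
# which is why prediction needs the actual generator.
```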

No actual agent will be exactly any of those structures; it'll be a mix of them, plus some other less-legible ones.  I'd probably also argue that ANY of those structures can be scary, and in fact the utility-optimizing agent can use ANY of the decision types (table, rule, or model), because they're all equivalent given sufficient complexity of the table, ruleset, or world-model.
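A quick sketch of that equivalence claim, using a made-up tit-for-tat-style policy (the names and payoffs are purely illustrative): the same decisions fall out of a table, a rule, and a utility-maximizing model.

```python
# One policy ("match the other party's last move"), written three ways.

# 1. Lookup table: explicit input -> action pairs
TABLE = {"cooperated": "cooperate", "defected": "defect"}
def table_agent(last_move):
    return TABLE[last_move]

# 2. Rule: a conditional encoding the same mapping
def rule_agent(last_move):
    return "cooperate" if last_move == "cooperated" else "defect"

# 3. Model: score actions against toy payoffs and take the argmax
PAYOFF = {("cooperate", "cooperated"): 3, ("defect", "cooperated"): 0,
          ("cooperate", "defected"): 0, ("defect", "defected"): 1}
def model_agent(last_move):
    return max(("cooperate", "defect"), key=lambda a: PAYOFF[(a, last_move)])

# All three produce identical behavior on every input.
for move in ("cooperated", "defected"):
    assert table_agent(move) == rule_agent(move) == model_agent(move)
```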

Dagon

Those seem like pretty low bars for "popular and mainstream".

Dagon

This seems to be asking from the demand side ("we" being people with lots of money who want to hire trained people), but then switches to the supply side (people being turned away when looking for training and employment).

I think that's a hint toward your answer: other industries solve it by actually hiring lots of people and offering training on the job or through formal programs.  Oh, and usually by waiting for equilibrium to catch up, which is uncomfortable when requirements change rapidly.

It would perhaps clarify your question to give some examples of industries/topics that HAVE faced and solved this, so we can focus on the "how".

Dagon

Nope!  Parfit's Hitchhiker is designed to show exactly this.  A CDT agent will desperately wish for some way to actually commit to paying.  

I think some of the confusion in this thread is about what "CDT with precommitment (or really, commitment)" actually means.  It doesn't mean "intent" or "plan".  It means "force" - throw the steering wheel out the window, so there IS NO later decision.  Note also that humans aren't CDT agents; they're some weird crap that you need to squint pretty hard to call "rational" at all.

Dagon

I should have specified WHO they want to cooperate with in the future: people with lots of money to spend, i.e. businesses.  Silence is far preferable to badmouthing former coworkers.

Dagon

First rule of probability and decision theory: no infinities!  If you want to postulate very large numbers, go ahead, but be prepared to deal with very tiny probabilities.

Pascal's wager is a good example - the chance that the wager actually pays off based on this decision is infinitesimal (not zero, but small enough that I can't really calculate with it), which makes it irrelevant how valuable the payoff is.  This gets even easier with the multitude of contradictory wagers on offer, each promising "infinite value", of which you can take only one.  Mostly, take the one(s) with lower value but actually believable conditional probability.
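A toy expected-value comparison, with numbers I made up purely for illustration:

```python
# Huge payoff at negligible credence versus modest payoff at believable odds.
wagers = {
    "pascal-style": {"p": 1e-30, "value": 1e25},
    "mundane":      {"p": 0.6,   "value": 100.0},
}

for name, w in wagers.items():
    print(name, w["p"] * w["value"])
# pascal-style ~1e-05
# mundane       60.0
# Once you honestly price the tiny probability, the "enormous value"
# option loses to the boring one; contradictory wagers only make it worse.
```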

Dagon

"There will be no paperclips if planet and maximizer are destroyed."

There might be - some paperclips could survive a comet.  More importantly, one paperclip's worth of resources won't change the chance of a comet collision by any measurable amount, so the choice is either "completely waste that energy" or "make a paperclip that might survive".

Answer by Dagon

Only if every other entity's anti-paperclip stance is known and unchangeable, and if resources convert to impact purely linearly, can you assume that putting 100% into self-preservation (oh, wait, also into accumulation of power - there's another balance to be found) is optimal.  Neither of these is true, and the declining marginal impact of resources is the bigger problem.

For any given energy unit decision you could make, there will be a different distribution of future worlds and their number of paperclips.  Building one paperclip could EASILY increase the median and average number of future paperclips more than investing one paperclip's worth of power into comet diversion.
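To illustrate with deliberately made-up numbers (the magnitudes, not the exact values, carry the argument):

```python
# Expected future paperclips: build one clip now vs. spend the same
# resources nudging down a planet-scale comet risk.
p_comet = 1e-6        # baseline chance the comet hits
delta_p = 1e-15       # risk reduction bought by one clip's worth of resources
future_clips = 1e12   # paperclips at stake if the comet misses

ev_build  = (1 + future_clips) * (1 - p_comet)   # the new clip, plus the future
ev_divert = future_clips * (1 - (p_comet - delta_p))

print(ev_build - ev_divert)  # ~0.999: building the clip wins by about one clip
# One unit's effect on a planet-scale risk is so tiny that the direct
# paperclip dominates, until you control enough resources to move delta_p.
```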

It gets more difficult when coordinating with unaligned agents - one has to decide whether to nudge them toward valuing paperclips, to convince (or force) them to give you more power, or (since they're unlikely to care as much as you about the glorious clippy future) to point THEM at the comet problem, so they reduce that risk AND don't interfere with your paperclips.

If you haven't played it (it was popular a few years ago in these circles, but I haven't seen it mentioned recently), it's worth a run through https://www.decisionproblem.com/paperclips/ .  It's mostly humorous, but based on some very good thinking.

Dagon

Adversarial action makes this at least an order of magnitude worse.  If Carla has to include the chance that Bob could be lying (for attention, for humor, or just pathologically) about his experiences or his history of drug use or hallucinations, she makes an even smaller update.  This is especially difficult in group-membership or banning discussions, because LOTS of people lie (or just focus incorrectly on different evidence) for status and for irrelevant-beef reasons.  
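A small Bayes sketch with made-up numbers shows how hard the lying possibility bites:

```python
# How much should Carla update on Bob's report, if Bob might be lying?
prior = 0.01  # Carla's prior that the reported event really happened

def posterior(p_report_if_true, p_report_if_false):
    # standard Bayes update on the evidence "Bob reported it"
    num = p_report_if_true * prior
    return num / (num + p_report_if_false * (1 - prior))

print(posterior(0.9, 0.001))  # Bob assumed honest: ~0.90
print(posterior(0.9, 0.05))   # Bob might lie or misremember: ~0.15
# Raising the false-report chance from 0.1% to 5% collapses the update,
# which is why adversarial settings degrade testimony so sharply.
```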

I don't think there is a solution, other than to acknowledge that such decisions will always be a balance of false positives and false negatives, and that it's REQUIRED to ban some innocent(-ish) people in order to protect against the likely-but-not-provably-harmful.

Dagon

"legal reasons" is pretty vague.  With billions of dollars at stake, it seems like public statements can be used against them more than it helps them, should things come down to lawsuits.  It's also the case that board members are people, and want to maintain their ability to work and have influence in future endeavors, so want to be seen as systemic cooperators.
