ahartell

Comments


Thank you for this post and your work on the Jones Act!

Sorry this is my only further comment, but below, are you conflating the cost reduction for a barrel of oil with the reduction for a gallon of gas? There are 42 gallons in a barrel of crude oil, which Google tells me corresponds to around 20 gallons of gasoline once refined.

Eliminating the Jones Act would have reduced average East Coast gasoline, jet fuel, and diesel prices by $0.63, $0.80, and $0.82 per barrel, respectively, during 2018–2019, with the largest price decreases occurring in the Lower Atlantic. The Gulf Coast gasoline price would increase by $0.30 per barrel. U.S. consumers’ surplus would increase by $769 million per year, and producers’ surplus would decrease by $367 million per year.

As in, the price of gasoline on the east coast would be $0.63 lower without the Jones Act. That’s insane. Can you imagine what voters would do if they realized they were paying that much extra for gas?
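For scale, here's the conversion as a quick back-of-the-envelope (my own sketch; it assumes the paper's $0.63/barrel figure passes through fully, and uses the rough 20-gallon gasoline yield mentioned above, which isn't a number from the paper):

```python
# Rough conversion of the paper's per-barrel figure to per-gallon terms.
SAVINGS_PER_BARREL = 0.63   # East Coast gasoline estimate from the paper, $/barrel
GALLONS_PER_BARREL = 42     # gallons of crude in a barrel
GASOLINE_YIELD = 20         # approx. gallons of gasoline per barrel (ballpark figure above)

print(f"per gallon of crude:    ${SAVINGS_PER_BARREL / GALLONS_PER_BARREL:.3f}")  # ~$0.015
print(f"per gallon of gasoline: ${SAVINGS_PER_BARREL / GASOLINE_YIELD:.3f}")      # ~$0.032
# Cents per gallon either way -- a factor of ~20-40 below a $0.63/gallon reading.
```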

Just noting the risk that the AIs could learn verifiable cooperation/coordination rather than kindness. This would probably be incentivized by the training ("you don't profit from being nice to a cooperate-rock"), and it could easily cut humans out of the trades that AIs make with one another.

Does this create a shortform feed?

It does! Alright, I'll leave this here because I think it will make me more likely to start using this feature.

I've enjoyed listening to the Nonlinear Library for recent posts.

Something that would really improve my experience would be including a link to the original post in the description section of each "episode".

Indeed, before dismissing it entirely, one would presumably want an account of why it features so prominently in our mental and social lives.

One aspect of this seems to be that clinging is a mechanism by which a portion of the network maintains its own activation. Given evolutionary dynamics, it's unsurprising to see widespread greediness and self-recommendation among neurons/neural structures (cf. Neurons Gone Wild).

[These don't seem like cruxes to me, but are places where our models differ.]

[...]

a crux for some belief B is another belief C which if one changed one's mind about C, one would change one's mind about B.

[...]

A double crux is a particular case where two people disagree over B and have the same crux, albeit going in opposite directions. Say if Xenia believes B (because she believes C) and Yevgeny disbelieves B (because he does not believe C), then if Xenia stopped believing C, she would stop believing B (and thus agree with Yevgeny) and vice-versa.

[...]

Across most reasonable people on most recondite topics, 'cruxes' are rare, and 'double cruxes' (roughly) exponentially rarer.

It seems like your model might be missing a class of double cruxes:

A double crux doesn't require that, if my interlocutor and I drew up belief maps, we would both find a load-bearing belief C about which we disagree. Rather, my interlocutor often has some 'crucial' argument or belief which isn't on my radar at all, but which would indeed change my mind about B if I were convinced it were true. Framed another way: for most beliefs I hold an implicit crux that there is no extremely strong argument/evidence to the contrary, and this implicit crux can match up against any load-bearing belief the other person has. In this light, one should not be very surprised to find double cruxes pretty regularly.
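As a toy sketch of that framing (entirely my own illustration; the beliefs and helper here are hypothetical):

```python
# Toy sketch of the "implicit crux" framing above (my illustration only).

def find_double_cruxes(my_cruxes: set, their_cruxes: set) -> set:
    """Classic double cruxes are shared load-bearing beliefs. The implicit
    crux "no decisive counterargument to B exists" additionally pairs with
    any load-bearing belief the other person holds that I lack."""
    explicit = my_cruxes & their_cruxes       # rare, per the post
    via_implicit = their_cruxes - my_cruxes   # their novel arguments hit my implicit crux
    return explicit | via_implicit

mine = {"C1: the base rates favor B"}
theirs = {"C2: a strong study argues against B"}  # not on my radar at all
print(find_double_cruxes(mine, theirs))
# -> {'C2: ...'}: no shared explicit crux, yet C2 can function as a double
#    crux via my implicit one, so finding double cruxes isn't so surprising.
```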

Further, even when you have a belief map where the main belief rests on many small pieces of evidence, it is usually possible to move up a level of abstraction and summarize all of that evidence in a single higher-level claim, which can then serve as a crux. This doesn't address your point about relatively unimportant shifts around 49%/51%, but in practice the abstraction move seems meaningful.

[Note: This comment seems pretty pedantic in retrospect. Posting anyway to gauge reception, and because I'd still prefer clarity.]

On honest businesses, I'd expect successful ones to involve overconfidence on average because of winner's curse.

I'm having trouble understanding this application of winner's curse.

Are you saying something like the following:

  1. People put in more resources and generally try harder when they estimate a higher chance of success. (Analogous to people bidding more in an auction when they estimate a higher value.)

  2. These actions increase the chance of success, so overconfident people are overrepresented among successes.

  3. This overrepresentation holds even if the "true chance of success" is the main factor. Overconfidence of founders just needs to shift the distribution of successes a bit, for "successful ones to involve overconfidence on average".
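If that's the reading, here's a quick Monte Carlo sketch of points 2-3 (my own construction with made-up parameters, not something from your comment):

```python
import random

# Toy model: founders observe a noisy estimate of their true chance of
# success, and higher estimates buy a bit more effort, which nudges the
# chance of success upward. All parameters are arbitrary.
random.seed(0)

gaps = []  # (estimate - true chance) among the successes
for _ in range(100_000):
    p_true = random.random()                  # true base chance of success
    estimate = p_true + random.gauss(0, 0.2)  # noisy self-assessment
    # effort: success tilts slightly toward the founder's own estimate
    p_success = min(max(0.8 * p_true + 0.2 * estimate, 0.0), 1.0)
    if random.random() < p_success:           # condition on observed success
        gaps.append(estimate - p_true)

print(f"mean overconfidence among successes: {sum(gaps) / len(gaps):+.3f}")
# Comes out positive: conditioning on success selects for optimistic
# estimates, even though the true chance does most of the work (point 3).
```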

First, this seems weird to me because I got the impression that you were arguing against overconfidence being useful.

Second, are you implying that successful businesses have on average "overpaid" for their successes in effort/resources? That is central to my understanding of winner's curse, but maybe not yours.

Sorry if I'm totally missing your point.
