Comments

Yeah, I was thinking it's hard to beat dried salted meat, hard cheese, and oil or butter. 

You also don't have to assume that all the food travels the whole way. If (hypothetically) you want to send 1 soldier's worth of food and water 7 days away, and each person can only carry 3 days' worth at a time, then you can try to have 3 days' worth deposited 6 days out, and then have a porter make a 2-day round trip carrying 1 day's worth to leave for that soldier to pick up on day 7. Then someone needs to have carried that 3 days' worth to 6 days out, which you can do by having more porters make the same kind of round trip from 5 days out, etc. Basically, you need exponentially more people and supplies the farther out your supply chains stretch. I think I first read about this in the context of the Incas, because potatoes are less calorie-dense per pound than dried grains, so I think the problem was even worse for them. Being able to get water along the way, and ideally to pillage the enemy's supplies, is also a very big deal.
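To make the exponential blow-up concrete, here is a minimal sketch of the depot math above. It's my own toy model with illustrative names (e.g. supplies_at_base), assuming porters carry a fixed number of days of food, eat one day's worth per day of travel, and shuttle between depots spaced one day apart; it also ignores the soldier carrying their own first few days.

```python
# Toy model only: of every `capacity` days of food that leaves a depot,
# 2 days are eaten on the out-and-back leg, so only `capacity - 2` days
# arrive one day farther out. Repeating that for every day of distance
# gives an exponential multiplier.

def supplies_at_base(days_out: int, capacity: int = 3, delivered: float = 1.0) -> float:
    """Days of food the base must start with to land `delivered` days of food `days_out` days away."""
    if capacity <= 2:
        raise ValueError("porters carrying <= 2 days of food can't advance the chain at all")
    overhead = capacity / (capacity - 2)  # multiplier per day of distance
    return delivered * overhead ** days_out

# With 3-day packs, as in the example above, each extra day of distance
# triples the food that has to leave the base: 3**7 = 2187 days of food
# just to deliver 1 day's worth 7 days out.
print(supplies_at_base(7))  # 2187.0
```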

I think at that point the limiting factors become the logistics of food, waste, water, and waste heat. In Age of Em Robin Hanson spends time talking about fractal plumbing systems and the like, for this kind of reason.

All good points, many I agree with. If nothing else, I think that humanity should pre-commit to following this strategy whenever we find ourselves in the strong position. It's the right choice ethically, and may also be protective against some potentially hostile outside forces.

However, I don't think the acausal trade case is strong enough that I would expect all sufficiently powerful civilizations to have adopted it. If I imagine two powerful civilizations with roughly identical starting points, one of which expanded while being willing to pay costs to accommodate weaker allies while the other did not and instead seized whatever they could, then it is not clear to me who wins when they meet. If I imagine a process by which a civilization becomes strong enough to travel the stars and destroy humanity, it's not clear to me that this requires it to have the kinds of minds that will deeply accept this reasoning. 

It might even be that the Fermi paradox makes the case stronger - if sapient life is rare, then the costs paid by the strong to cooperate are low, and it's easier to hold to such a strategy/ideal.

This seems to completely ignore transaction costs for forming and maintaining an alliance? Differences in the costs to create and sustain different types of alliance members? Differences in the potential to replace some types of alliance members with different or new types? There can be entities for whom forming an alliance that includes humanity incurs greater costs than humanity's membership can ever repay.

Also, I agree that in a wide range of contexts this strategy is great for the weak and for the only-locally-strong. But if any entity knows it is strong in a universal or cosmic sense, this would no longer apply to it. Plus everyone less strong would also know this, and anyone who truly believed they were this strong would act as though this no longer applied to them either. I feel like there's a problem here akin to the unexpected hanging paradox that I'm not sure how to resolve except by denying the validity of the argument.

On screen space:

When, if ever, should I expect actually-useful smart glasses or other tech to give me access to arbitrarily large, high-res virtual displays without taking up a lot of physical space or preventing me from sitting anywhere other than a single, fixed desk?

 

On both the Three Body Problem and economic history: It really is remarkably difficult to get people to see that 1) Humans are horrible, and used to be more horrible, 2) Everything is broken, and used to be much more broken, and 3) Actual humans doing actual physical things have made everything much better on net, and in the long run "on net" is usually what matters.

On the Paul Ehrlich organization: Even if someone agrees with these ideas, do they not worry about what this makes kids feel about themselves? Like, I can just see it: "But I'm the youngest of 3! My parents are horrible and I'm the worst of all!"

 

And this, like shame-based cultural norm enforcement, disproportionately inflicts extra suffering on those who care enough to want to be pro-social and conscientious.


I agree that filling a context window with worked sudoku examples wouldn't help for solving hidouku. But there is a common element to the two games. Both look like math, but neither is really about numbers, beyond the fact that there's an ordered sequence; the sequence of items could just as easily be an alphabetically ordered set of words. Both are much more about geometry, or topology, or graph theory: about how a set of points is connected. I would not be surprised to learn that there is a set of tokens containing no examples of either game which, combined with a checker (like your link has) that points out when a mistake has been made, enables solving a wide range of similar games.
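To illustrate that shared-structure point, here is a purely hypothetical sketch (my own names and data shapes, not anything from the linked checker): both games can be posed as labeling the nodes of a graph of cells, with each game supplying a different predicate, either over groups of cells or over edges.

```python
# Illustrative sketch only. A puzzle state is a dict mapping cell -> value
# (or None if the cell is empty); `groups` lists the cell-groups a
# sudoku-like game constrains, and `neighbors` maps each cell to its
# adjacent cells for a hidouku-like game.

def group_all_distinct(grid, groups):
    """Sudoku-like check: no filled value repeats within any group of cells."""
    for group in groups:
        values = [grid[c] for c in group if grid[c] is not None]
        if len(values) != len(set(values)):
            return False
    return True

def consecutive_values_adjacent(grid, neighbors):
    """Hidouku-like check: whenever v and v+1 are both placed, their cells must be adjacent."""
    where = {v: cell for cell, v in grid.items() if v is not None}
    return all(
        where[v + 1] in neighbors[cell]
        for v, cell in where.items()
        if v + 1 in where
    )
```

The solving strategies differ, but the representation (cells, connections, an ordered label set) is the same; only the predicate changes, which is the sense in which the two games share structure.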

I think one of the things humans do better than current LLMs is that, as we learn a new task, we vary what counts as a token and how we nest tokens. How do we chunk things? In sudoku, each box is a chunk, each row and each column is a chunk, the board is a chunk, "sudoku" is a chunk, "checking an answer" is a chunk, "playing a game" is a chunk, and there are probably lots of others I'm ignoring. I don't think just prompting an LLM with the full text of "How to Solve It" in its context window would get us to a solution, but at some level I do think it's possible to make explicit, in words and diagrams, what it is humans do to solve things, in a way that is legible to it. I think it largely resembles repeatedly telescoping in and out, to lower and higher levels of abstraction, applying different concepts and contexts, locally sanity-checking ourselves, correcting locally obvious insanity, and continuing until we hit some sort of reflective consistency. Different humans have different limits on what contexts they can successfully do this in.

Oh, by "as qualitatively smart as humans" I meant "as qualitatively smart as the best human experts".

I think that is more comparable to saying "as smart as humanity." No individual human is as smart as humanity in general.
