Dzoldzaya

I'm an aspiring EA / rationalist. My previous posts were intended in a slightly tongue-in-cheek way (though they convey ideas I take seriously); future posts will endeavour to lay out my ideas more clearly and in keeping with LW norms.


I definitely appreciate these scenarios, but it's worth looking at where things don't seem to fit, because people will often use these details to dismiss them. 

In particular, this section seems to clash with my understanding of conflict logistics and incentives. 

Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watch destruction on their home turf in awe.

Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles

As far as I can tell, it's 1) practically infeasible and 2) misaligned with both sides' incentives to expend their stockpiles of short- to mid-range missiles against targets on the other side's soil.

For the PRC, the main objective would be to restore deterrence and maintain regional dominance. Some missile strikes might occur, but the focus would likely be on targeting naval forces rather than widespread strikes on land targets. Both sides would prioritise controlling the South China Sea, focusing on air superiority, naval engagements, etc. The PRC wouldn't want to spread themselves too thin; instead, they might try to force the Taiwan issue through strikes on naval defences and regional infrastructure.

If the PRC continued to escalate to U.S. bases to knock out naval and air support to the South China Sea, the only feasible option would be to focus on bases they could reach with land-based missiles (max range ~3,100 miles), like Guam and Okinawa. Striking the U.S. mainland would be logistically impractical, because China doesn't have missile platforms anywhere nearby. Also, many of their longer-range missiles are dual-use (nuclear and conventional), so large-scale non-nuclear strikes would scupper their own deterrence. Attacking the mainland would also be massively escalatory, and would risk a direct nuclear exchange (where the U.S. would dominate, despite massive destruction on both sides).

Also, in terms of "depleting all their stockpiles": I don't think either side would be able to expend their stockpiles within two weeks. The U.S. has a decent missile stockpile deployed on submarines and Pacific fleet vessels (maybe 1/3 of their total), which could be launched within a few days. But even if they wanted to instantly restock Tomahawk stockpiles to keep on blasting away at the Chinese mainland, it'd take well over two weeks to get assets over from the Atlantic, and they wouldn't have much strategic incentive to do so.

This is partly because you get diminishing returns on missile strikes. Each additional missile has less marginal impact as key targets are destroyed or degraded, while the comparative value of holding missiles in strategic reserve for when new high-value targets emerge increases.

I wonder what you think of the super-setting weights vs. HIIT trade-off? 

I've come full circle on this - I used to prefer HIIT, then I switched to hypertrophy-style weight training (mainly after watching exercise science YouTube, RP, etc.), and now I've gone back to HIIT for most workouts. A typical workout looks like a 21-15-9 progression of 5 or 6 exercises, e.g. weighted squats, pull-ups, burpees, lunges, kettlebell swings, press-ups, leg raises, box jumps, or (relatively light) deadlifts or Olympic lifts, for 15-25 mins. My heart rate usually stays above 140 and hits VO2 max at some point.

To me, HIIT feels way better, and more time efficient. Hypertrophy training (even with supersets, which are definitely better) still feels a bit more like a chore, and I never get a "buzz". 

I don't have good enough theory of mind to know which is best to recommend to others, though.

There's a world of difference between "let's just continue doing this research project on something obscure with no theory of impact because penicillin" and "this is more likely than not to become irrelevant in 18 months' time, but if it works, it will be a game-changer".

Robustness and generalizability are subcomponents of the expected value of your work/research. If you think that these are neglected, and that your field is too focused on the "impact" components of EV, i.e. there are too many moon shots, please clarify that, but your analogy fails to make this argument. 

As it is, I suspect that optimizing for robust generalizability is a sure-fire way of ensuring that most people become "very general helpers", which seems like a very harmful thing to promote.

Despite finishing your comment in a way that I hope we can all just try to ignore... you make an interesting point. The Pollywog example works well, if accurate. If wild animal suffering is the worst thing in the world, it follows that wild animal pleasure could easily be the best thing in the world, and it might be a huge opportunity to do good if we can identify species for which this is true. This seems like one of the only ways to make the world net-positive, if we do choose to maintain biological life.

But, tragically, I think that's a difficult case to make for most animals. Omnizoid addresses it partly: "If you only live a few weeks and then die painfully, probably you won’t have enough welfare during those few weeks to make up for the extreme badness of your death. This is the situation for almost every animal who has ever lived." But I think he understates it here. 

Most vertebrates are larval fish. 99%+ of fish larvae die within days. For a larval fish, being eaten by predators (the fate of about 75%, on average) is invariably the best outcome, because dying of starvation, temperature changes, or physiological failure (the other 25%) seems a lot worse.

When researchers do experiments that starve baby fish to death (your reminder that ethics review boards have a very peculiar definition of ethics), they find that most sardines born in a single spawning don't even begin exogenous feeding, surviving only a few days on existing energy reserves. I would speculate that much of this time is spent in a state of constant hunger stress, driven by an extremely high metabolism and rising cortisol levels. For the vast majority who cannot secure food, their few hours or days of existence probably look like a desperate struggle as they gradually weaken and lose energy before dying. This is partly because they were born too small to ever have a chance of exogenous feeding - like a premature human baby unable to suckle, most don't have the suction force to consume plankton.

I don't doubt that there might be some pleasure there to balance out the suffering, but it seems like a hard sell for most r-strategists.

If you'll forgive diving into the linguistics here, English is atypical in that we have a distinctive present perfect. 

In French, the present perfect-type construction (passé composé) has subsumed the past simple, which is now reserved for literary or archaic uses, so you use it for any past experience. But you can use something close to our future perfect with a similar sense: "J'aurai terminé mon diplôme d'ici mars" ("I will have finished my degree by March"). In Chinese (and other Sinitic languages), there's a present perfect-type construction, albeit used less consistently (e.g. "have been to...", quguo 去过), but it's very awkward to use future perfect-style constructions like "I just want to have read this". I'm sure many obscure languages will have even less of a distinction.

So, if the theory in your title is correct, people from languages without a present perfect will have unruined lives, or at least lives ruined by different grammatical structures (we're looking at you, subjonctif...)! This sounds like a testable theory to me!

There are a bunch of theories about language influencing thought patterns, e.g. the idea that speakers of futureless languages save more: https://www.anderson.ucla.edu/faculty/keith.chen/papers/LanguageWorkingPaper.pdf . So you could test something similar for perfect tenses. I hear that some Italian and Spanish dialects differ in whether the present perfect and past simple are distinct, so you might have a great natural experiment there.

My personal hobby horse here is counterfactual conditionals, usually used to express regret (mainly "If I had done x...", "I should have done x...", "I wish I'd done x..."). I used to have harmful, regretful thought patterns like this until I learned Chinese very immersively. I realised that, in Chinese-thinking mode, I had stopped using these conditionals in my train of thought, and noticed them returning when I reintegrated into an English-speaking context. It wasn't a clean experiment by any means - it could have been partly due to thinking in a non-native language, which made (over)thinking slower, and of course I was living in China, with obviously massive lifestyle effects. But still, I did identify these conditional thought patterns as almost definitely negative, so I'm convinced there's at least some effect.

I haven't noticed "wanting to have done something" being less common in Chinese- or French-mode because of a more limited present perfect tense, but it would be interesting if bilinguals on LW have noticed something there.

I’m not saying to endorse prejudice. But my experience is that many types of prejudice feel more obvious. If someone has an accent that I associate with something negative, it’s usually pretty obvious to me that it’s their accent that I’m reacting to.

Of course, not everyone has the level of reflectivity to make that distinction. But if you have thoughts like “this person gives me a bad vibe but maybe that’s just my internalized prejudice and I should ignore it”, then you probably have enough metacognition to also notice if there’s any clear trait you’re prejudiced about, and whether you would feel the same way about other people with that trait.

 

It seems like the most common situation where you'd ignore bad vibes is when a trait like this confuses your signals. When you identify a trait you react negatively to that "feels more obvious", especially one it's socially taboo to be prejudiced against (race, ethnicity/accent, LGBT status, mental/physical disability), this can interfere with your ability to correctly interpret other evidence (including "vibes"), making it very easy to overcompensate in the other direction.

The classic example from women's self-defence classes: you enter an enclosed space (e.g. a lift) with a man of a particular ethnicity who makes you instantly nervous. You consider not getting in, but then think "oh, this must just be his ethnicity I'm reacting to", castigate yourself for your prejudice, ignore the bad vibes, and get in anyway - and it turns out he was dodgy.

Or a neuro-atypical colleague suggests a small business venture in a manner that would normally raise red flags. You get "bad vibes", but you interpret this as irrational prejudice against autistic behaviour traits, so you go along with it despite your vibes. Only later do you realise that your red flags were real, and your correction for prejudice was adding unnecessary noise into your decision-making.

I don't know whether there's evidence to back this up, but my sense is that "correction for potential prejudice" would be the major source of error here, especially among people who are more reflective.

I explained why I think tracing back personal history is impractical. 

Your separate method to spot check my model is just a simplified version of the same model.

Well, you can stick your own numbers into the model and see what you get - a few tweaks to the estimates put farmer ancestors higher, as would assuming more prehistoric lineage collapses.

For example, if you think that almost everyone who had offspring from 2000 BC-1200 AD was your ancestor, then you get more farmer ancestors. I initially put it closer to 40% (assuming little to no Sub-Saharan or Native American ancestry, and a more gradual spread throughout Eurasia), but the model is sensitive to these estimates.

From a "Eurasia-centric" perspective, my sense is that personal ancestry doesn't make a major difference except perhaps for pockets like Siberia and Iceland. It's noticeably different for people with some New World or Sub-Saharan ancestry, and wildly different if you're of entirely Aboriginal Australian descent.
 

Sorry, just read this response. 

On the intuition question, my intuition was probably the other way because most of human history was non-farming, and because the vast majority of farmers (those born in the last millennium) weren't my ancestors. 

I updated my model to correct an error - it's now a bit closer: 7.8 billion non-farmer ancestors to 6.4 billion farmer ancestors (4.9 billion exclusively farmers), but I still basically stand by the logic.

To respond to your question about why I didn't pick a fixed number of personal ancestors:

We have fewer recent ancestors: assuming 16 generations back to 1600, we'd have around 20k to 50k ancestors at that point (2^16 ≈ 65k, minus inbreeding/pedigree collapse). If we wanted to count these ancestors carefully, we'd count back with an algorithm that accounts for population size and exponentially increasing inbreeding.
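A minimal sketch of what that capped-doubling count might look like (the 5% per-generation collapse rate, the generation count, and any population caps below are illustrative assumptions, not figures from my model):

```python
# Minimal sketch of a capped-doubling ancestor count. The collapse rate,
# generation count, and any population caps are illustrative assumptions,
# not the figures my model actually uses.

def ancestors_by_generation(generations=16,
                            collapse_rate=0.05,       # assumed pedigree-collapse overlap per generation
                            population_by_gen=None):  # optional: living population at each generation back
    counts = []
    distinct = 1.0
    for g in range(1, generations + 1):
        distinct *= 2 * (1 - collapse_rate)           # potential ancestors double, minus assumed overlap
        if population_by_gen is not None:
            distinct = min(distinct, population_by_gen[g - 1])  # can't exceed that era's population
        counts.append(int(distinct))
    return counts

# e.g. ~16 generations back to 1600 with a flat 5% collapse per generation
print(ancestors_by_generation()[-1])  # ~29,000, inside the 20k-50k ballpark
```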

We could also plausibly use this strategy to derive a more accurate number of ancestors from 1200-1600 - this might be a period where individual/geographical differences, or population constraints, play a significant role. If you're Icelandic, most of your ancestors in this period will still be from Iceland, but if you're Turkish, your ancestors from this period are more likely to extend from Britain to Japan. My model doesn't do this, because it sounds difficult and because the numbers are negligible anyway - I just estimate that 0.1% to 1% of the total humans born from 1200 to today were my ancestors.

By around 1200 AD, it surely becomes impractical to rely on a personal family tree to track ancestry, because of the exponential growth in the number of ancestors. Beyond that point, your total potential ancestors (in the billions, without factoring in inbreeding) massively exceed the global population (in the hundreds of millions). The limited population size becomes the constraint.

So an Italian might assume that they are descended from a significant portion (40%?) of Europe’s population in 1200 AD. By 800 AD, this would extend to a majority (60%?) of people living across Eurasia and Northern Africa. By the time we go back to the period from 500 BC to 1000 AD, it’s likely that most people from the major Old World civilizations and their peripheries (where the bulk of the global population lived) were direct ancestors of people alive today. My numbers could be way off, but I think this is a better way of getting into the right ballpark than trying to trace back individual ancestry. I used these figures as a baseline: https://www.prb.org/articles/how-many-people-have-ever-lived-on-earth/

You're right that I don't account for major bottlenecks - my assumption is that they basically even out over time, and that humans born in each period had a constant 20-60% chance of leaving no descendants alive today. If you wanted to refine this model, you'd take into account more recent (e.g. Black Death) and less recent (Neolithic Y-chromosome bottleneck) bottlenecks.
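For what it's worth, the overall structure is just an era-bucketed sum. Every era boundary, birth total, and percentage in this sketch is a placeholder to stick your own numbers into, not an input or output of my actual model:

```python
# Era-bucketed skeleton: for each era, (births in billions, assumed fraction of
# those births who are your ancestors, assumed fraction of those ancestors who
# farmed). All numbers here are placeholders, not the model's actual inputs.

ERAS = {
    "pre-farming":      (10.0, 0.40, 0.00),
    "early farming":    (30.0, 0.40, 0.50),
    "2000 BC-1200 AD":  (40.0, 0.40, 0.80),
    "1200 AD-present":  (35.0, 0.005, 0.80),
}

farmer = sum(b * anc * farm for b, anc, farm in ERAS.values())
non_farmer = sum(b * anc * (1 - farm) for b, anc, farm in ERAS.values())

print(f"farmer ancestors:     ~{farmer:.1f} billion")
print(f"non-farmer ancestors: ~{non_farmer:.1f} billion")
```

(With placeholder figures like these, the totals obviously won't reproduce the 7.8 / 6.4 billion numbers above; the point is just to show where each estimate enters, and how sensitive the farmer/non-farmer split is to the ancestor fractions you assume for each era.)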

Ah, interesting. His Guerre des intelligences does seem more obviously accelerationist, but his latest book gives slightly different vibes, so perhaps his views are changing. 

But my sense is that he actually seems kind of typical of the polémiste tradition in French intellectual culture, where it's more about arguing with flair and elegance than developing consistent arguments. So it might be difficult to find a consistent ideology behind his combination of accelerationism, a somewhat pessimistic transhumanism, and moderate AI fear.
