All of Alexander's Comments + Replies

The lack of willpower is a heuristic which doesn’t require the brain to explicitly track & prioritize & schedule all possible tasks, by forcing it to regularly halt tasks—“like a timer that says, ‘Okay you’re done now.’”

If one could override fatigue at will, the consequences could be bad. Users of dopaminergic drugs like amphetamines often note issues with channeling the reduced fatigue into useful tasks rather than, say, alphabetizing one's bookcase.

In more extreme cases, if one could ignore fatigue entirely, then analogous to lack of pain, the consequenc

... (read more)

What causes us to sometimes try harder? I play chess once in a while, and I've noticed that sometimes I play half-heartedly and end up losing. However, sometimes I simply tell myself that I will try harder, and I end up doing really well. What's stopping me from trying hard all the time?

AlexanderΩ060

Possibly a bad question, but I'm curious why it's called "mechanistic"?

8LawrenceC
Many forms of interpretability seek to explain how the network's outputs relate to high-level concepts without referencing the actual functioning of the network. Saliency maps are a classic example, as are "build an interpretable model" techniques such as LIME. In contrast, mechanistic interpretability tries to understand the mechanisms that compose the network. To use Chris Olah's words: [...] Or see this post by Daniel Filan.

Let me try to apply this approach to my views on economic progress.

To do that, I would look at the evidence in favour of economic progress being a moral imperative (e.g. improvements in wellbeing) and against it (development of powerful military technologies), and then make a level-headed assessment that's proportional to the evidence.

It takes a lot of effort to keep my beliefs proportional to the evidence, but no one said rationality is easy.

2Vladimir_Nesov
I don't see it. The point of the technique is to defer your own judgement on arguments/claims/facts that live inside of theories indefinitely, giving them slack to grow according to the theory's own perspective (for months and years). Instead of judging individual arguments in their own right, according to your own worldview, you judge them within the theory, from the theory's weird perspective, which you occasionally confidently disagree with. Less urgently, you also judge the theory as a whole according to your own worldview. If it's important/interesting, or has important/interesting competitors, then you keep developing it, even if it doesn't look promising as a whole (some of its internally-generated arguments will survive its likely demise).

The point that should help with motivated cognition is switching between different paradigms/ideologies that focus on theories competing with a suspect worldview, so that a germ of truth overlooked in its misguided competitors won't be indefinitely stunted in its development.

On the 5-second level, the skill is to maintain a two-level context, with a description of the current theory/paradigm/ideology/hypothesis at the top level, and the current topic/argument/claim at the lower level. There are two modes in which claims of the theory are judged. In the internal mode, you are channeling the theory, taking the Ideological Turing Test for it, thinking in character rather than just thinking, considering the theory's claims according to the theory itself, to change them and develop the theory without affecting their growth with your own perspective and judgement. This gives the arguments courage to speak their mind, not worrying that they are disbelieved or disapproved of by external-you. In the external mode, you consider claims made by the theory from your own perspective and treat them as predictions. When multiple theories make predictions about the same claim, and you are externally confident in its truth, that applies a likelihood rati
Alexander*110

Do you notice your beliefs changing over time to match whatever is most self-serving? I know that some of you enlightened LessWrong folks have already overcome your biases and biological propensities, but I notice that I haven't.

Four years ago, I was a poor university student struggling to make ends meet. I didn't have a high paying job lined up at the time, and I was very uncertain about the future. My beliefs were somewhat anti-big-business and anti-economic-growth.

However, now that I have a decent job, which I'm performing well at, my views have shifted ... (read more)

6Vladimir_Nesov
The general strategy is to have fewer beliefs: to allow development of detailed ideas/hypotheses/theories without giving much attention to evaluating their external correctness (such as the presence of their instances in the real world, or them making sense in other paradigms you understand), instead focusing on their internal correctness (validity of arguments inside the idea, from the point of view of a paradigm/mindset native to the idea). Then you only suffer motivated attention, which is easier to counter by making sure to pay some attention to developing an understanding of alternatives.

The results look pretty weird, though. For example, I imagine that there might be a vague impression from my writings that I'm changing my mind back and forth on a months-long scale about topics I've thought about for years, or that I believe contradictory things at the same time, or that I forget some fundamental well-known things (paradigms can insist on failing to understand/notice some established facts, especially when their well-being depends on it).

I'm not sure how to communicate transient interest in an obscure idea without it coming across as a resurgent belief in it (with a touch of amnesia about other things), and I keep using belief-words to misleadingly describe beliefs that only hold inside an idea/hypothesis/theory too. (This is not something language has established conventions for.)

Amazing. I look forward to getting myself a copy 😄

Will the Sequences Highlights become available in print on Amazon?

6Ruby
We are definitely thinking about it. We've just done a limited print run of 300 copies that we'll be giving out to some people; if that goes well and seems worth it, we might scale up to an Amazon-available version.

Have you come across the work of Yann LeCun on world models? LeCun is very interested in generality; he has called common sense the "dark matter of intelligence". He thinks that to achieve a high degree of generality, an agent needs to construct world models.

Insects have highly simplified world models, and that could be part of the explanation for the high degree of generality insects exhibit. For example, the fact that the male jewel beetle falls in love with beer bottles, mistaking them for females, is strong evidence that beetles have highly simplified world mod... (read more)

I see what you mean now. I like the example of insects. They certainly do have an extremely high degree of generality despite their very low level of intelligence.

Oh, I'm not making the argument that randomly permuting the Rubik's Cube will always solve it in finite time, but that it might. I think it has a better chance of solving it than the chicken. The chicken might get lucky and knock the Rubik's Cube off the edge of a cliff, and it might rotate by accident, but other than that, the chicken isn't going to do much work on solving it in the first place. Meanwhile, random permutation might solve it (or, in the worst case, might never solve it). I just think that random permutations have a higher chance of solving it than the chicken does, but I can't formally prove that.

We can demonstrate this with a test.

  1. Get a Rubik's-Cube-playing robotic arm, and have it apply random moves to a shuffled Rubik's Cube until it's solved. How many years will it take until it's solved? It's some finite expected time, right? Millions of years? Billions of years?
  2. Get a chicken, give it a Rubik's Cube, and ask it to solve it. I don't think it will perform better than our random robot above.

I just think that randomness is a useful benchmark for performance on accomplishing tasks.
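A minimal sketch of that "randomness as a baseline" idea, using sorting a short scrambled list as a stand-in for the cube (the cube's roughly 4.3 × 10^19 states make a direct simulation impractical); the function name and parameters here are my own illustration, not anything from the thread:

```python
import random

def random_search_steps(n_items: int, seed: int = 0) -> int:
    """Count how many uniformly random shuffles it takes to reach the
    solved (sorted) state: a bogosort-style random baseline."""
    rng = random.Random(seed)
    state = list(range(n_items))
    rng.shuffle(state)  # scramble
    steps = 0
    while state != sorted(state):
        rng.shuffle(state)
        steps += 1
    return steps

# Each shuffle is sorted with probability 1/n!, so the expected number
# of shuffles is about n! (roughly 120 for n = 5).
trials = [random_search_steps(5, seed=s) for s in range(500)]
print(sum(trials) / len(trials))
```

The same geometric argument applied to the cube's state space gives an astronomically large but finite expected time, which matches the "millions or billions of years" intuition above, whereas the chicken's expected time is plausibly infinite.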

1[anonymous]
In Bogosort, the randomized version has an unbounded worst case: there is no finite bound on the number of shuffles it might take. (Its expected running time is still finite, about O(n·n!), since each uniform shuffle of n distinct elements is sorted with probability 1/n!.) You can even turn the Rubik's Cube problem into a sorting-like search problem: finding a path from the starting position to the solved position through a series of moves. I'm not sure whether each scramble has a unique shortest solution path.

I imagine the relationship differently. I imagine a relationship between how well a system can perform a task and the number of tasks the same system can accomplish.

Does a chicken have general intelligence? A chicken can perform a wide range of tasks with low performance, and it performs most tasks worse than random. For example, could a chicken solve a Rubik's Cube? I think it would perform worse than random.

Generality to me seems like an aggregation of many specialised processes working together seamlessly to achieve a wide variety of tasks. Where do huma... (read more)

2cubefox
Regarding the counterexample: I think it is fair to say that perfect orthogonality is not plausible, especially not if we allow cases with one axis being zero, whatever that might mean. But intelligence and generality could be still largely orthogonal. What do you think of the case of insects, as an example of low intelligence and high generality? (I think not even the original orthogonality thesis holds perfectly. An example is the oft-cited fact that animals don't optimize for the goal of inclusive genetic fitness because they are not intelligent enough to grasp such a goal. So they instead optimize for similar things, like sex.)
1[anonymous]
How?

When some people hear the words "economic growth", they imagine factories spewing smoke into the atmosphere.

This is a false image of what economists mean by "economic growth". To economists, economic growth is about achieving more with less. It's about efficiency: using our scarce resources more wisely.

The stoves of the 17th century had an efficiency of only 15%, while the induction cooktops of today achieve an efficiency of 90%. Pre-16th-century kings didn't have toilets, but 54% of humans today have toilets, all thanks to economic... (read more)
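Taking those efficiency figures at face value, the implied saving is roughly sixfold; a quick back-of-the-envelope check (a sketch, reading efficiency as useful heat out per unit of energy in):

```python
# Input energy needed per unit of useful cooking heat is 1 / efficiency.
for name, efficiency in [("17th-century stove", 0.15), ("induction cooktop", 0.90)]:
    print(f"{name}: {1 / efficiency:.2f} units of energy in per unit of heat out")
# 6.67 vs. 1.11: the same cooking output for about one sixth of the energy.
```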

Hello, I have a question. I hope someone with more knowledge can help me answer it.

There is evidence suggesting that building an AGI requires plenty of computational power (at least early on) and plenty of smart engineers/scientists. The companies with the most computational power are Google, Facebook, Microsoft and Amazon. These same companies also have some of the best engineers and scientists working for them. A recent paper by Yann LeCun titled A Path Towards Autonomous Machine Intelligence suggests that these companies have a vested interest in actual... (read more)

1Signer
I don't have any relevant knowledge, but it's a tradeoff between having some influence and actually doing alignment research. It's better for persuasion to have an alignment framework, especially if the only advantage you have as a safety-team employee is being present at the meetings where everyone discusses biases in AI systems. It would be better if it were just "Anthropic, but everyone listens to them", but changing it to be like that takes time you could spend solving alignment.

Additionally, your analogy doesn't map well to my comment. A more accurate analogy would be to say that active volcanoes are explicit and non-magical (like reason), while inactive volcanoes are mysterious and magical (like intuitions), even though both phenomena have the same underlying physical architecture (rocks and pressure for volcanoes, brains for cognition) but manifest differently.

I just reckon that we are better off working on understanding how the black box actually works under the hood, instead of placing arbitrary labels on things we don't understand, drawing lines in the sand, and then debating those things with verve. Labelling some cognitive activities as reason and others as intuition doesn't explain how either phenomenon actually works.

Thanks for your insights, Vladimir. I agree that Abstract Algebra, Topology and Real Analysis don't require much in terms of prerequisites, but I think that without sufficient mathematical maturity, these subjects will be rough going. I should've made clear that by "Sets and Logic" I didn't mean a full-fledged course on advanced set theory and logic, but rather simple familiarity with the axiomatic method through books like Naive Set Theory by Halmos and Book of Proof by Hammack.

A map displaying the prerequisites of the areas of mathematics relevant to CS/ML:

A dashed line means this prerequisite is helpful but not a hard requirement.
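One way to make the solid-versus-dashed distinction concrete is to encode the map as a graph and topologically sort only the hard edges to get a valid study order. A minimal sketch; the topics and edges below are hypothetical placeholders, not a transcription of the actual map:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hard prerequisites (solid lines): topic -> set of required topics.
# These edges are illustrative assumptions only.
hard = {
    "Linear Algebra": set(),
    "Calculus": set(),
    "Probability": {"Calculus"},
    "Statistics": {"Probability", "Linear Algebra"},
    "Machine Learning": {"Statistics", "Linear Algebra"},
}

# Soft prerequisites (dashed lines): helpful but not required,
# so they impose no ordering constraint.
soft = {
    "Probability": {"Linear Algebra"},
    "Machine Learning": {"Computational Complexity"},
}

# A valid study order only needs to respect the hard edges.
print(list(TopologicalSorter(hard).static_order()))
```

Dashed edges then act as hints: an unstudied soft prerequisite suggests a detour rather than a blocker.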

1anonymousaisafety
You should add computational complexity.
3Vladimir_Nesov
It might be possible to find such an ordering for specific textbooks, but this doesn't make much sense on the level of topics. It helps to know a bit of each topic to get more out of any other topic, so it's best to study most of these in parallel. That said, it's natural to treat topology as one of the entry-level topics, together with abstract algebra and analysis. And separate study of set theory and logic mostly becomes relevant only at a more advanced level.

Almost any technology has the potential to cause harm in the wrong hands, but with [superintelligence], we have the new problem that the wrong hands might belong to the technology itself.

Excerpt from "Artificial Intelligence: A Modern Approach" by Russell and Norvig.

I would define "physical" as the set of rigid rules governing reality that exist beyond our experience and that we cannot choose to change.

I can cause water to freeze to form ice using my agency, but I cannot change the fundamental rules governing water, such as its freezing point. These rules go beyond my agency and, in fact, constrain my agency.

Physics constrains everything else in a way that everything else does not constrain physics, and thus the primacy of physics.

2TAG
If there were a rigid law that the wicked are punished karmically, would that count as physical?

if there's one superpower close to reaching the power level of everyone else combined, then everyone else will ally to bring them down, maintaining a multipolar balance of power.

I hope they don't use nukes when they do that because that way, everyone loses.

I've read "The Elephant in the Brain", and it was certainly a breathtaking read. I should read it again.

That's an eloquent way of describing morality.

It would be so lovely had we lived in a world where any means of moneymaking helped us move uphill on the global morality landscape comprising the desires of all beings. That way, we could make money without feeling guilty about doing it. Designing a mechanism like this is probably impossible.

I wish moneymaking were by default aligned with optimising for the "good". That way, I could focus on making money without worrying too much about the messiness of morality. I wholly believe that existential risks are unequivocally the most critical issues of our time, because the cost of neglecting them is so enormous, and my rational self would like to work directly on reducing them. However, I'm also profoundly programmed through millions of years of evolution to want a lovely house in the suburbs, a beautiful wife, some adorable children, lots of friends, ... (read more)

6Dagon
Note that this is a complaint about morality, not about moneymaking. Making money is aligned with satisfying some desires of some people.  But morality is all about the messiness of divergent and unaligned humans and sets of humans (and non-human entities and future/potential humans who don't have any desires yet, but will at some point).

I agree. I don't think this kind of behaviour is the worst thing in the world. I just think it is instrumentally irrational.

Alexander*520

Premise: people are fundamentally motivated by the "status" rewarded to them by those around them.

I have experienced the phenomenon of demandingness described in your post, and you've elucidated it brilliantly. I regularly attend in-person EA events, and I can see status visibly being rewarded according to impact, which is very different from how it's typically rewarded in broader society. (This is not necessarily a bad thing.) The status hierarchy in EA communities goes something like this:

  • People who've dedicated their careers to effective causes. O
... (read more)
4quanticle
I don't like analogizing EA to a religious movement, but I think such an analogy is appropriate in this instance. If I went to a Christian gathering, accompanying a devout friend, and someone came up to me and asked, "Oh, I haven't seen you before, which church do you attend?" I would reply, "Oh, I'm not Christian." Then if, after a bit of discussion, that person chose to turn and walk away, I wouldn't be offended. In fact, them turning and walking away is one of the better outcomes: far better than them continuing to proselytize at me for the rest of the event. In this case, the organizer encountered a person who was clearly not bought into EA, ascertained as much after a short discussion, and then chose to walk away. While that's not the friendliest response, it's hardly the worst thing in the world.

Loved your comment, especially the “goodharting” interjections haha.

Your comment reminded me of "building" company culture. Managers keep trying to sculpt a company culture, but in reality they have limited control over it. Company culture is more a thing that happens and evolves, and any individual can only do so much to influence it one way or the other.

Similarly, status is a thing that just happens and evolves in human society, and sometimes it has good externalities and other times it has bad externalities.

I quite liked “What You Do ... (read more)

7Viliam
That sounds like advice on parenting. What your kids will do is what they see you doing. Actually no, that would be too easy -- your kids will instead do a more stupid version of what they see you doing. Like, you use a swear word once a month, but then they will keep using it every five minutes.

And, I don't know, maybe this is where the (steelman of) preaching is useful: to correct the exaggerations. Like, if you swear once in a while, you will probably fail to convince your kids that they should never do that; copying is a strong instinct. But if you tell them "okay kids, polite people shouldn't be doing this, and yes I do it sometimes, but please notice that it happens only on some days, not repeatedly, and not in front of your grandparents or teachers -- could you please follow the same rules?" then maybe at some age this would work. At least it does not involve denying reality.

And maybe the manager could say something like "we said that we value X, and yes, we had this one project that was quite anti-X, but please notice that most of our projects are actually quite X; this one was an exception, and we will try to have fewer of those in the future". Possibly with a discussion of why that one specific project ended up anti-X, and what we could do to prevent it from happening in the future.

Intentionally influencing other people is a skill I lack, so I can only guess here. It seems to me that although no manager actually has the whole company perfectly under control, there are still some more specific failure modes that are quite common:

  • Some people are obvious bullshitters, and if you are familiar with this type, you just won't take any of their words seriously. Their lips just produce random syllables that are supposed to increase your productivity, or maybe just signal to their boss that they are working hard to increase your productivity, but the actual meaning of the words is completely divorced from reality. (For example, the managers who

I recently read Will Storr's book "The Status Game" based on a LessWrong recommendation by user Wei_Dai. It's an excellent book, and I highly recommend it.

Storr asserts that we are all playing status games, including meditation gurus and cynics. He then classifies the different kinds of status games we can play, arguing that "virtue dominance" games are the worst kind, as they are the root of cancel culture.

Storr has a few recommendations for playing the status game in positive-sum ways. First, view other people as being the heroes of thei... (read more)

5Paweł Sysiak
The Elephant in the Brain extensively influenced the way I perceive social motivations. It talks about exactly the same subject, and about the mechanisms of why we don't discern it in ourselves. If you haven't read it, you should check it out. It rewrote my views to the extent that I feel afraid to read "The Status Game", because it feels so easy to fall into confirmation bias here. This effect seems so strong that I would love to read something opposite. Are there any good critiques of this view? Once, I was listening to a lecture by Frans de Waal in which he expressed confusion that in primatology almost everything is explained through hierarchy in the group, yet when we listen to social scientists, almost none of it is. The elephant in the brain, again. I think this is such an important topic.
Viliam110

The important fact about "zero-sum" games is that they often have externalities. Maybe status is a zero-sum game in the sense that either you are higher-status than me, or the other way round, but there is no way for everyone to be at the top of the status ladder.

However, the choice of "weapons" in our battle matters for the environment. If people can only get higher status by writing good articles, we should expect many good articles to appear. (Or, "good" articles, because goodharting is a thing.) If people can get higher status by punching each other, w... (read more)

Hello, thank you for the post!

All images in this post are no longer available. I'm wondering if you're able to embed them directly into the rich text :)

This post has brilliantly articulated a crucial idea. Thank you!

Microfoundations for macroeconomics are a step in the right direction towards a gears-level understanding of economics. Still, our current understanding of cognition and human nature is primarily based on externally visible behaviour, not on gears. Do you think microeconomics is progressing in the right direction, towards more gears-level agent models?

I read the arguments against microfoundations, and some opponents point to "feedback loops". They claim that the arrow of causation ... (read more)

This reminds me of the book "Four Thousand Weeks". The core idea is that if you become productive at doing something, then society will want you to do more of that thing. For example, if you were good at responding to email, always prompt, and never missing an email, society would send you more emails because you had built a reputation of being good at responding to email.

Excellent post, thanks, Eli. You've captured some core themes and attitudes of rationalism quite well.

I find the "post" prefix unhelpful whenever I see it used. It implies a final state of whatever it is referring to.

What meaning of "rationality" does "post-rationality" refer to? Is "post-rationality" referring to "rationality" as a cultural identity, or is it referring to "rationality" as a process of optimisation towards achieving some desirable states of the world?

There is an important distinction between someone identifying as a rationalist but acting ... (read more)

Oh, it wouldn't eliminate all selection bias, but it certainly would reduce it. I said "avoid selection bias," but I changed it to "reduce selection bias" in my original post. Thanks for pointing this out.

It's tough to extract completely unbiased quasi-experimental data from the world. A frail elder dying from a heart attack during the volcanic eruption certainly contributes to selection bias.

5TLW
Understatement. I can't go into more details here, unfortunately, and some of the details are changed for various reasons, but one of the best examples I've seen of selection bias was roughly the following:

Chips occasionally failed. When chips failed, they were sent to FA[1], where they were decapped[2] and (destructively) examined[3] to see what went wrong. This is a fairly finicky process[4], and about half the chips that went through FA were destroyed in the FA process without really getting any useful info. Chips that went through FA had a fairly flat failure profile of various different issues, none of which contributed more than a few percent to the overall failure rate.

Only... it turns out that ~40% of chip failures shared a single common cause. Turns out there was an issue that was causing the dies to crack[5], and said chips FA received and then promptly discarded, because they thought they had cracked the die during the FA process. (FA did occasionally crack the dies, but at about an order of magnitude lower rate than they had thought.)

[1] Failure analysis.
[2] Removed, very carefully, from their packaging.
[3] E.g. by lapping and using an electron microscope.
[4] To put it mildly.
[5] Amusingly, a cracked die can actually sometimes kind of limp along due to e.g. capacitive coupling. And then you bump it or the temperature changes...

A missing but essential detail: the government compensated these people and provided them with relocation services. Therefore, even the frail were able to relocate.

1TLW
That does help reduce the selection bias, but I don't see why it would eliminate it. I was more focused on the chance of death than on financials. The chance of said little old granny dying before being resettled, either on the way or at the time of the initial event, is likely higher than that of the healthy 25-year-old male.

Recently, I came across this brilliant example of reducing selection bias when extracting quasi-experimental data from the world, towards the beginning of the book "Good Economics for Hard Times" by Banerjee and Duflo.

The authors were interested in understanding the impact of migration on income. However, most data on migration contains plenty of selection bias. For example, people who choose to migrate are usually audacious risk-takers, or have the physical strength, know-how, funds and connections to facilitate their undertaking.

To reduce these selection biases, the authors looked at people forced to relocate due to rare natural disasters, such as volcano eruptions.
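A toy simulation of why the forced-relocation comparison helps; the trait, the income equation and every number below are illustrative assumptions of mine, not the authors' model:

```python
import random

random.seed(0)
MIGRATION_GAIN = 5  # the true causal effect of moving, by construction

def person():
    grit = random.gauss(0, 1)      # unobserved trait (risk-taking, strength...)
    base_income = 50 + 10 * grit   # grit raises income even without moving
    return grit, base_income

def mean(xs):
    return sum(xs) / len(xs)

# Voluntary migration: mostly high-grit people choose to move.
movers, stayers = [], []
for _ in range(100_000):
    grit, income = person()
    if grit > 0.8:                 # self-selection on the unobserved trait
        movers.append(income + MIGRATION_GAIN)
    else:
        stayers.append(income)
naive = mean(movers) - mean(stayers)

# Forced relocation (e.g. a volcanic eruption): movers are a random sample.
forced, rest = [], []
for _ in range(100_000):
    grit, income = person()
    if random.random() < 0.1:      # the eruption ignores grit
        forced.append(income + MIGRATION_GAIN)
    else:
        rest.append(income)
quasi = mean(forced) - mean(rest)

print(f"naive estimate: {naive:.1f}  (true effect: {MIGRATION_GAIN})")
print(f"forced-move estimate: {quasi:.1f}")
```

The naive comparison bundles the causal effect with the fact that movers were already different people, so it overshoots badly; the eruption-based comparison lands near the true effect because the disaster relocates a random slice of the population.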

2TLW
Doesn't this still have selection bias, just of a different form? For an obvious example, a little old granny who can barely walk is far less likely to (successfully) relocate due to a volcanic eruption than a healthy 25-year-old male.

Words cannot possibly express how thankful I am for you doing this!

I bet that most of them would replicate flawlessly. Boring lab techniques and protein structure dominate the list; nothing fancy or outlandish. Interestingly, famous papers like those on relativity, the expansion of the universe, the discovery of DNA etc. don't rank anywhere near the top 100. There is also a math paper on fuzzy sets in the top 100. Now that's a paper that definitely replicates!

Excellent article! I agree with your thesis, and you’ve presented it very clearly.

I largely agree that we cannot outsource knowledge. For example, you cannot outsource the knowledge of how to play the violin; you must invest countless hours of deliberate practice to learn to play it.

A rule of thumb I like is only to delegate things that you know how to do yourself. A successful startup founder is capable of comfortably stepping into the shoes of anyone they delegate work to. Otherwise, they would have no idea what high-quality work looks like and h... (read more)

The orb-weaving spider. I updated my original post to include the name.

2Yoav Ravid
Thanks :)

Excellent write-up. Thanks, Elizabeth.

I'm a software engineer at a company that implements a "20%" policy. Every couple of months, we have a one-week (sometimes two-week) sprint for the 20%. As you've pointed out, it works out to less than 20%, and many engineers choose to keep working on their primary projects instead, to catch up on delivery dates.

In the weeks leading up to the 20% sprint, we create a collaborative table in which engineers propose ideas and pitch those ideas in a meeting on the Monday morning of the sprint. Proposals fall into two categories:

  • Reducing tech
... (read more)
8Viliam
Ironically, it seems to me that "agile" development took autonomy away from software developers, and "20%" gives it partially back.
Alexander*100

On the mating habits of the orb-weaving spider:

These spiders are a bit unusual: females have two receptacles for storing sperm, and males have two sperm-delivery devices, called palps. Ordinarily the female will only allow the male to insert one palp at a time, but sometimes a male manages to force a copulation with a juvenile female, during which he inserts both of his palps into the female’s separate sperm-storage organs. If the male succeeds, something strange happens to him: his heart spontaneously stops beating and he dies in flagrante. This may be th

... (read more)
3Yoav Ravid
That's fascinating. Which spiders are those?

I think that you raise a crucial point. I find it challenging to explain to people that AI is likely very dangerous. It's much easier to explain that pandemics, nuclear wars or environmental crises are dangerous. I think this is mainly due to the abstractness of AI and the concreteness of those other dangers, leading to availability bias.

The most common counterarguments I've heard from people about why AI isn't a serious risk are:

  • AI is impossible, and it is just "mechanical" and lacks some magical properties only humans have.
  • When we build AIs, we will not
... (read more)
2Legionnaire
As another short argument: We don't need an argument for why AI is dangerous, because dangerous is the default state of powerful things. There needs to be a reason AI would be safe.

This is a list of the top 100 most cited scientific papers. Reading all of them would be a fun exercise.

6Viliam
How many of them do replicate? :D
6Gunnar_Zarncke
That link didn't work for me somehow but I managed to find the spreadsheet with the 100 articles: https://www.nature.com/news/polopoly_fs/7.21247!/file/WebofSciencetop100.xlsx

Speculating about this is a fun exercise. I argue that the answer is less probable.

The survivors might have a stronger commitment to life affirmation, given that the fragility of life would be so fresh in their minds following Armageddon. I argue that this would have a minimal effect. We know that the dinosaurs went extinct, and we know that the average lifespan of a mammalian species is about one million years. We know that we have fought world wars, and we know that life is precious and unreasonably rare in the universe. Yet, in aggregate, we still don... (read more)

Thanks for all the excellent writing on economic progress you've put out. I finished reading "Creating a Learning Society" by Joseph Stiglitz a few days ago, and I am in the process of writing a review of that book to share here on LessWrong. Your essays are providing me with a lot of insights that I hope to take into account in my review :D

Alexander*120

The theory of ‘morality as cooperation’ (MAC) argues that morality is best understood as a collection of biological and cultural solutions to the problems of cooperation recurrent in human social life. MAC draws on evolutionary game theory to argue that, because there are many types of cooperation, there will be many types of morality. These include: family values, group loyalty, reciprocity, heroism, deference, fairness and property rights. Previous research suggests that these seven types of morality are evolutionarily-ancient, psychologically-distinct,

... (read more)

Yes, it looks isomorphic. Thanks for sharing your write-up. You've captured this idea well.

I appreciate how Toby Ord considers "knock-on effects" in his modelling of existential risks, as presented in "The Precipice". A catastrophe doesn't necessarily have to cause extinction to count as an existential threat, because its knock-on effects would undoubtedly impair our preparedness for whatever comes next.

7Rafael Harth
Is this isomorphic to the framing of existential risk as one category? (Which is not something I came up with, I just distilled the idea.) It still seems to me like the idea that first- and second-order effects are qualitatively different is just a mistake born of the "risk from xxx" framing.