Wei Dai
I think I need more practice talking with people in real time (about intellectual topics). (I've gotten much more used to text chat/comments, which I like because it puts less time pressure on me to think and respond quickly, but I feel like I now incur a large cost due to excessively shying away from talking to people, hence the desire for practice.) If anyone wants to have a voice chat with me about a topic that I'm interested in (see my recent post/comment history to get a sense), please contact me via PM.

www.weidai.com

Comments (sorted by newest)

Obligated to Respond
Wei Dai · 2h

If you get around to writing that post, please consider/address:

  • Theory of the second best - "The economists Richard Lipsey and Kelvin Lancaster showed in 1956 that if one optimality condition in an economic model cannot be satisfied, it is possible that the next-best solution involves changing other variables away from the values that would otherwise be optimal." - Generalizing from this, given that humans deviate from optimal rationality in all kinds of unavoidable ways, the "second-best" solution may well involve belief in some falsehoods. (See the toy sketch after this list.)
  • Managing risks while trying to do good - We're all very tempted to overlook risks while trying to do good, including (in this instance) destroying "that which can be destroyed by truth".
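
To make the second-best point concrete, here is a toy numeric sketch (my own illustration with a made-up welfare function, not taken from Lipsey and Lancaster): welfare depends on two interacting policy variables, and once one of them is stuck at a distorted value, holding the other at its first-best value is worse than moving it away too.

```python
# Toy illustration of the theory of the second best (made-up welfare function).
# Welfare depends on two policy variables x and y, with an interaction term.
# First-best: choose both x and y freely.
# Second-best: x is stuck at a distorted value, so the optimal y shifts too.

import numpy as np

def welfare(x, y):
    # Concave welfare with an x*y interaction (Hessian [[-2,1],[1,-2]] is
    # negative definite, so there is a unique interior maximum).
    return -(x - 2)**2 - (y - 3)**2 + x * y

# First-best values, from the first-order conditions: x* = 14/3, y* = 16/3.
x_star, y_star = 14/3, 16/3

# Now suppose x is unavoidably distorted to x = 1.
x_fixed = 1.0
ys = np.linspace(0, 10, 100001)
y_second_best = ys[np.argmax(welfare(x_fixed, ys))]

print(f"first-best y:        {y_star:.3f}")            # ~5.333
print(f"second-best y (x=1): {y_second_best:.3f}")      # ~3.500
print(f"welfare keeping y*:  {welfare(x_fixed, y_star):.3f}")        # ~-1.111
print(f"welfare adjusting y: {welfare(x_fixed, y_second_best):.3f}") # ~2.250
```

Analogously, once some deviation from optimal rationality is unavoidable, re-optimizing the remaining "variables" (including beliefs) need not land on the all-true-beliefs point.
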
My talk on AI risks at the National Conservatism conference last week
Wei Dai · 3h

Yes, on the surface all you did was point out an overlap between Rationalists and other groups. But what I don't understand is why you chose to emphasize this particular overlap, instead of, for example, the overlap between us and conservatives in wanting to stop ASI from being built, or why you didn't simply leave the Rationalists out of this speech and talk about us another time, when you could speak with more nuance.

My hypotheses:

  1. You just want to speak the truth as you see it, without regard to the political consequences. You had room to insert "Rationalist" into that derogatory sentence, but not room to say something longer about how rationalists and conservatives should be allies in this fight.
  2. You had other political considerations that you can't make explicit here, e.g. trying to signal honesty or loyalty to your new potential allies, or preempting a possible attack from other conservatives that you're a Rationalist who shouldn't be trusted (e.g. because we're generally against religion).

I'm leaning strongly towards 2 (as 1 seems implausible given the political nature of the occasion), but still find it quite baffling, in part because it seems like you probably could have found a better way to accomplish what you wanted, without as many of the negative consequences (i.e., alienating the community that originated much of the thinking on AI risk, and making future coalition-building between our communities more difficult).

I think I'll stop here and not pursue this line of questioning/criticism further. Perhaps you have some considerations or difficulties that are hard to talk about and for me to appreciate from afar.

My talk on AI risks at the National Conservatism conference last week
Wei Dai · 1d

> “rationalists”

Thanks, I had missed this in my reading. It does seem a strange choice to include in the speech (in a negative way) if the goal is to build a broad alliance against building ASI. Many rationalists are against building ASI in our current civilizational state, including Eliezer, who started the movement/community.

@geoffreymiller, can you please explain your thought process for including this word in your sentence? I'm really surprised that you seem to consider yourself a rationalist (using "we" in connection with rationalism and arguing against people who do not consider you to be a community member "in good standing"[1]) and yet talk about us in an antagonistic/unfriendly way in front of others, without some overriding reason that I can see.

  1. ^

    I had upvoted a bunch of your comments in that thread, thinking that we should consider you a member in good standing.

peterbarnett's Shortform
Wei Dai · 2d (edited)

Thanks for this explanation, it definitely makes your position more understandable.

> and on top of that there is the abstract idea of "good", saying you shouldn't hurt the weak at all. And that idea is not necessitated by rational negotiation. It's just a cultural artifact that we ended up with, I'm not sure how.

I can think of 2 ways:

  1. It ended up there the same way that all the "nasty stuff" ended up in our culture, more or less randomly, e.g. through the kind of "morality as status game" talked about in Will Storr's book, which I quote in Morality is Scary.
  2. It ended up there via philosophical progress, because it's actually correct in some sense.

If it's 1, then I'm not sure why extrapolation and philosophy will pick out the "good" and leave the "nasty stuff". It's not clear to me why aligning to culture would be better than aligning to individuals in that case.

If it's 2, then we don't need to align with culture either - AIs aligned with individuals can rederive the "good" with competent philosophy.

Does this make sense?

> So for AIs maybe this kind of carry-over to philosophy is also the best we can hope for.

It seems clear that technical design or training choices can make a difference (but nobody is working on this). Consider the analogy with the US vs Chinese education systems, where the US system seems to produce a lot more competence and/or interest in philosophy (relative to STEM) compared to the Chinese system. And comparing humans with LLMs, it sure seems like they're on track to exceed (top) human level in STEM while being significantly less competent in philosophy.

Obligated to Respond
Wei Dai · 2d

I think religion and the institutions built up around it (such as freedom of religion) are a fairly clear counterexample to this. They are in part a coordination technology built upon a shared illusion (e.g., that God exists), with safeguards against its "misuse" built up over centuries of experience. If you destroy the illusion at the wrong time (i.e., before better replacements are ready), you could cause a lot of damage, at least in the short run, and possibly even in the long run given path dependence.

Richard Ngo's Shortform
Wei Dai · 3d

It seems to me that Richard isn't trying to bring back ethnonationalism, or even trying to "add just that touch of ethnic pride back into the meme pool", but just trying to diagnose "how the western world got so dysfunctional". If ethnonationalism and the taboo against ethnonationalism are both bad (as an ethnic minority, I'm personally pretty scared of the former), then maybe we should get rid of the taboo and defend against ethnonationalism by other means, similar to how there is little to no taboo against communism[1], yet it hasn't come close to taking power or reapproaching its historical high-water mark in the West.

  1. ^

    If you doubt this, there's an advisor to my local school district who is a self-avowed Marxist and professor of education at the state university, and writes book reviews like this one:
    «For decades the educational Left and critical pedagogues have run away from Marxism, socialism, and communism, all too often based on faulty understandings and falling prey to the deep-seated anti-communism in the academy. In History and Education Curry Stephenson Malott pushes back against this trend by offering us deeply Marxist thinking about the circulation of capital, socialist states, the connectivity of Marxist anti-capitalism, and a politics of race and education. In the process Malott points toward the role of education in challenging us all to become abolitionists of global capitalism.» (Wayne Au, Associate Professor in the School of Educational Studies at the University of Washington Bothell; Editor of the social justice teaching magazine Rethinking Schools; Co-editor of Mapping Corporate Education Reform: Power and Policies Networks in the Neoliberal State)

Obligated to Respond
Wei Dai · 3d

Some thoughts that taking this perspective triggers in me:

  1. Ask culture is actually kind of a fantastical achievement in human history, given the degree to which humans are social animals and our minds are constantly processing social consequences. Getting people to just say what they're thinking, without considering the impact of their words on other people's feelings, how is that even possible?
  2. If you consider it to be a rare and valuable achievement, a highly desirable but potentially fragile Schelling point or equilibrium (guys, if we leave level 0, we'll sink into a quagmire of infinite levels of social metacognition and never be able to easily tell what someone really has in mind!), perhaps that makes some people's behaviors more understandable, such as insisting that their words have no social consequences, or why they're so resistant to suggestions that they should consider other people's feelings before they speak. (But they're probably doing that subconsciously or by habit/indoctrination, not following this reasoning explicitly.)
  3. I'm not sure what to do in light of all this. Even talking about it abstractly like I'm doing might destroy the shared illusion that is necessary to sustain a culture where people speak their minds honestly without consideration for social consequences. (But it's probably ok here since the OP is already pushing away from it, and the door has already been opened by other posts/comments.)
  4. I'm not super-wedded to ask culture - the considerations in the OP seem real, but it also seems to be neglecting the advantages of ask culture, and not asking why it came about in the first place. It feels like a potential Chesterton's fence situation.
Richard Ngo's Shortform
Wei Dai · 4d (edited)

Can you explain more your affinity for virtue ethics, e.g., was there a golden age in history, that you read about, where a group of people ran on virtue ethics and it worked out really well? I'm trying to understand why you seem to like it a lot more than I do.

Re government debt, I think that is actually driven more by increasing demand for a "risk-free" asset, with the supply going up more or less passively (what politician is going to refuse to increase debt and spending, as long as people are willing to keep buying it at a low interest rate?). And from this perspective it's not really a problem, except that everyone gets used to the higher spending while some of the processes increasing the demand for government debt might only be temporary.

AI-written explanation of how financialization causes increased demand for government debt:

    • Financialization isn't a vague blob; it's a set of specific, concrete processes, each of which acts like a powerful vacuum cleaner sucking up government debt.

      Let's trace four of the most important mechanisms in detail.

      1. The Derivatives Market: The Collateral Multiplier

      Derivatives (options, futures, swaps) are essentially financial side-bets on the movement of an underlying asset. The total "notional" value of these bets is in the hundreds of trillions, dwarfing the real economy.

      • The Problem: If you make a bet with someone, you need to ensure they can pay you if you win. To solve this, both parties post collateral (or "margin"), which is a high-quality asset held by a third party (a clearinghouse). If someone defaults, their collateral is seized.
      • The Specific Mechanism: What is the best possible collateral? An asset that is universally trusted, easy to price, and can be sold instantly for cash. This is, by definition, a government bond. It is the gold standard of collateral.
      • How it Drives Demand: The growth of the derivatives market creates a leveraged demand for collateral. A single real-world asset (like a barrel of oil) can have dozens of derivative contracts layered on top of it. Each layer of betting requires a new layer of collateral to secure it. As the volume and complexity of financial trading grows, the demand for pristine collateral to backstop all these bets grows exponentially. This is a huge, structural source of demand that is completely detached from the need to fund real-world projects. (See the toy sketch below.)

      Analogy: A giant, global casino. The more tables and higher-stakes games the casino runs (financialization), the more high-quality security chips (government bonds) it needs to hold in its vault to ensure all winnings can be paid out.
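
To put rough numbers on this "collateral multiplier", here is a toy sketch (the flat 5% margin rate and all figures are made up; real initial-margin models are far more complex):

```python
# Toy sketch of collateral demand from layered derivative bets.
# Assume each layer of notional requires posting a flat initial margin,
# and only government bonds count as eligible collateral.

def collateral_demand(underlying_value, layers, margin_rate):
    """Bond collateral needed when `layers` of derivative contracts,
    each with notional equal to the underlying, require `margin_rate`
    initial margin."""
    total_notional = underlying_value * layers
    return total_notional * margin_rate

oil_barrel = 80.0  # $ value of the real asset
for layers in (1, 10, 50):
    need = collateral_demand(oil_barrel, layers, margin_rate=0.05)
    print(f"{layers:3d} layers of bets -> ${need:,.2f} of bonds tied up")
# Collateral demand scales with trading volume, not with the real asset.
```
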

      2. Banking Regulation: The Regulatory Mandate for Safety

      After the 2008 financial crisis, global regulators (through frameworks like Basel III) sought to make banks safer. They did this by forcing them to hold more "safe stuff" against their risky assets.

      • The Problem: Banks make money by taking short-term deposits and making long-term, risky loans. This makes them inherently fragile.
      • The Specific Mechanism: Regulations created the concept of High-Quality Liquid Assets (HQLA). Banks are legally required to hold a certain amount of HQLA that they could sell instantly to cover their obligations in a crisis. The regulations are very specific about what counts as HQLA. The highest tier (Level 1 HQLA), which has no restrictions, is almost exclusively comprised of cash and government bonds.
      • How it Drives Demand: This creates a legally mandated, non-negotiable demand for government debt. For a bank to grow its business (i.e., make more loans), it must simultaneously purchase more government bonds to satisfy its HQLA requirements. This directly links the growth of private credit in the economy to a mandatory increase in the demand for public debt. (See the toy sketch below.)

      Analogy: A building code for banks. The regulators say, "For every floor of risky office space you build (loans), you must add a corresponding amount of steel-reinforced concrete to the foundation (government bonds)." To build a taller skyscraper, you have no choice but to buy more concrete.
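
As a minimal sketch of the constraint (toy numbers; the actual Basel III Liquidity Coverage Ratio involves detailed run-off rates and HQLA haircuts), the rule is roughly HQLA / 30-day net cash outflows ≥ 100%:

```python
# Toy sketch of the Basel III LCR constraint: HQLA >= 30-day net outflows.
# Assume (unrealistically simply) that projected 30-day net outflows are a
# fixed fraction of deposits, and that the bank funds loan growth with
# deposit growth.

def required_hqla(deposits, outflow_rate=0.10, lcr_min=1.0):
    """Minimum HQLA (mostly government bonds) for a given deposit base."""
    net_outflows_30d = deposits * outflow_rate
    return net_outflows_30d * lcr_min

for deposits in (100e9, 150e9, 200e9):
    print(f"deposits ${deposits/1e9:.0f}B -> "
          f"HQLA needed ${required_hqla(deposits)/1e9:.0f}B")
# Growing the loan (and deposit) book mechanically grows bond demand.
```
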

      3. The Asset Management Industry: The Rise of Liability-Driven Investing (LDI)

      The pool of professionally managed money (pensions, insurance funds, endowments) has exploded. These institutions have very specific, long-term promises to keep.

      • The Problem: A pension fund needs to be able to pay a 65-year-old a fixed income for the next 30 years. They cannot rely on the volatile stock market for this guaranteed cash flow.
      • The Specific Mechanism: This led to the strategy of Liability-Driven Investing (LDI). The goal is to own assets whose cash flows perfectly match your future liabilities. A 30-year government bond, which pays a fixed coupon every six months and repays principal in 30 years, is the perfect instrument for this. It is a contractual promise of cash flow that can be precisely matched against the contractual promise to a retiree.
      • How it Drives Demand: As the global population ages and the pool of retirement savings grows, the total value of these long-term liabilities skyrockets. This creates a massive, structural, and relatively price-insensitive demand for long-duration government bonds from the largest pools of capital in the world. They aren't buying them for speculation; they are buying them to defease their promises. (See the toy sketch below.)

      Analogy: A pre-order system for future cash. An insurance company is like a business that has accepted millions of pre-orders for cash to be delivered in 20, 30, and 40 years. To guarantee they can fulfill those orders, they go to the most reliable supplier (the government) and place their own pre-orders for cash (by buying bonds) that will arrive on the exact same dates.
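
A minimal sketch of cash-flow matching (made-up pension numbers and yield; real LDI programs also use duration matching, swaps, and coupon bonds):

```python
# Toy sketch of liability-driven investing: defease a fixed pension
# liability with a zero-coupon government bond of the same maturity.

def zero_coupon_price(face_value, yield_rate, years):
    """Price today of a zero-coupon bond paying `face_value` in `years`."""
    return face_value / (1 + yield_rate) ** years

liability = 1_000_000.0   # promised pension payment due in 30 years
years = 30
y = 0.04                  # assumed 30-year government bond yield

cost_today = zero_coupon_price(liability, y, years)
print(f"Buy ${liability:,.0f} face of 30y bonds for ${cost_today:,.0f} today")
# The bond's contractual cash flow exactly matches the contractual
# promise to the retiree, whatever markets do in between.
```
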

      4. The Globalization of Finance: The Search for a Universal Safe Haven

      Finance is no longer national; it is a single, interconnected global system. This system requires a neutral, trusted asset for settling international balances and storing wealth.

      • The Problem: A Chinese exporter earns dollars, or a Saudi sovereign wealth fund earns euros. Where do they store this foreign currency wealth safely and in a liquid form? They cannot hold billions in a retail bank account, and they may not want the risk of corporate stocks.
      • The Specific Mechanism: The US Treasury bond has become the de facto global reserve asset. It is the ultimate safe haven for foreign central banks, corporations, and investors. Its liquidity and the military/political backing of the US government make it the world's default savings vehicle.
      • How it Drives Demand: Every time global trade grows, it creates larger trade surpluses in countries like China, Japan, and Germany. These surpluses are recycled into US Treasury bonds. Every time there is a global crisis (a European debt crisis, an emerging market collapse), capital flees from the periphery to the perceived safety of the core, which means a rush to buy US government debt. This makes the demand for Treasuries reflexive: the more unstable the world gets, the higher the demand for them becomes.

An analogy I like is with China's Land Finance (土地财政), where the government funded a large part of its spending by continuously selling urban land to real estate developers to build apartments and offices on, which was fine as long as urbanization was ongoing but is now causing problems as that process slows down (along with a bunch of other issues/complications). I think of government debt as a similarly useful resource or asset, that in part enables more complex financial products to be built on top, but may cause a problem one day if demand for it slows down.

ETA: To make my point another way, I think the modern monetary system (with a mix of various money and money-like assets serving somewhat different purposes, including fiat money, regulated bank deposits, government debt) has its own internal logic, and while distortions exist, they are inevitable under any system (only second-best solutions are possible, due to bounded rationality and principal-agent problems). If you want to criticize it I think you have to go beyond "debt that will never be repaid" (which sounds like you're trying to import intuitions for household/interpersonal finances, where it's clearly bad to never pay one's debts, to a very different situation), and talk about what specific distortions you're worried about, how the alternative is actually better (taking into account its own distortions), and/or how/why the system is causing erosion of virtue ethics.

Mikhail Samin's Shortform
Wei Dai · 4d

> I have heard rumor that most people who attempt suicide and fail, regret it.

After doing some research on this, I think this is unlikely to be true. The only quantitative study I found says that among its sample of suicide attempt survivors, 35.6% are glad to have survived, while 42.7% feel ambivalent, and 21.6% regret having survived. I also found a couple of sources agreeing with your "rumor", but one cited just a suicide awareness trainer as its source, while the other cited the above study as the only evidence for its claim, somehow interpreting it as "Previous research has found that more than half of suicidal attempters regret their suicidal actions." (Gemini 2.5 Pro says "It appears the authors of the 2023 paper misinterpreted or misremembered the findings of the 2005 study they cited.")

If this "rumor" was true, I would expect to see a lot of studies supporting it, because such studies are easy to do and the result would be highly useful for people trying to prevent suicides (i.e., they can use it to convince potential suicide attempters that they're likely to regret it). Evidence to the contrary are likely to be suppressed or not gathered in the first place, as almost nobody wants to encourage suicides. (The above study gathered the data incidentally, for a different purpose.) So everything seems consistent with the "rumor" being false.

peterbarnett's Shortform
Wei Dai · 5d

> First, I think there's enough overlap between different reasoning skills that we should expect a smarter than human AI to be really good at most such skills, including philosophy. So this part is ok.

Supposing this is true, how would you elicit this capability? In other words, how would you train the AI (e.g., what reward signal would you use) to tell humans when they (the humans) are making philosophical mistakes, and to present humans with only true philosophical arguments/explanations? (As opposed to presenting the most convincing arguments, which may exploit flaws in humans' psychology or reasoning, or telling the humans what they most want to hear or what's most likely to get a thumbs-up or high rating.)

> Fourth—and this is the payoff—I think the only good outcome is if the first smarter than human AIs start out with “good” culture, derived from what human societies think is good.

"What human societies think is good" is filled with pretty crazy stuff, like wokeness imposing its skewed moral priorities and empirical beliefs on everyone via "cancel culture", and religions condemning "sinners" and nonbelievers to eternal torture. Morality is Scary talks about why this is generally the case, why we shouldn't expect "what human societies think is good" to actually be good.

Also, wouldn't "power corrupts" apply to humanity as a whole if we manage to solve technical alignment and not align ASI to the current "power and money"? Won't humanity be the "power and money" post-Singularity, e.g., each human or group of humans will have enough resources to create countless minds and simulations to lord over?

I'm hoping that both problems ("morality is scary" and "power corrupts") are philosophical errors that have technical solutions in AI design (i.e., AIs can be designed to help humans avoid/fix these errors), but this is highly neglected and seems unlikely to happen by default.

Posts (sorted by new)

  • Wei Dai's Shortform (2y)
  • Managing risks while trying to do good (2y)
  • AI doing philosophy = AI generating hands? (2y)
  • UDT shows that decision theory is more puzzling than ever (2y)
  • Meta Questions about Metaphilosophy (2y)
  • Why doesn't China (or didn't anyone) encourage/mandate elastomeric respirators to control COVID? (3y)
  • How to bet against civilizational adequacy? (3y)
  • AI ethics vs AI alignment (3y)
  • A broad basin of attraction around human values? (3y)
  • Morality is Scary (4y)

Wikitag Contributions

  • Carl Shulman (2 years ago)
  • Carl Shulman (2 years ago, -35)
  • Human-AI Safety (2 years ago)
  • Roko's Basilisk (7 years ago, +3/-3)
  • Carl Shulman (8 years ago, +2/-2)
  • Updateless Decision Theory (12 years ago, +62)
  • The Hanson-Yudkowsky AI-Foom Debate (13 years ago, +23/-12)
  • Updateless Decision Theory (13 years ago, +172)
  • Signaling (13 years ago, +35)
  • Updateless Decision Theory (14 years ago, +22)