Matrice Jacobine

Student in fundamental and applied mathematics, interested in theoretical computer science and AI alignment

Twitter account: @MLaGrangienne

Tumblr account: @supernulperfection


The bottom 55% of the world population own ~1% of capital, the bottom 88% own ~15%, and the bottom 99% own ~54%. That last figure is a majority, but the top 1% are mere millionaires (not even multi-millionaires or billionaires), likely owning wealth more vitally important to the economy than personal property and bank accounts, and empirically they already seem to be doing fine dominating the economy, with no neoclassical catechism about comparative advantage preventing them from doing so. However you massage the data, it seems highly implausible that driving the value of labor (the non-capital factor of production) to zero wouldn't be a global catastrophic risk and a value drift risk/s-risk.
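As a quick sanity check, the cumulative bottom-share figures quoted above can be inverted to get the implied top-end shares (a minimal sketch; the numbers are the approximate figures cited in the comment, not fresh data):

```python
# Approximate cumulative wealth shares cited above: bottom X% -> share of capital.
cumulative = {55: 0.01, 88: 0.15, 99: 0.54}

# Invert each data point: the top (100 - X)% own the remaining share.
for pct, share in cumulative.items():
    print(f"Top {100 - pct}% own ~{1 - share:.0%} of capital")
```

In particular, the implied ~46% share held by the top 1% is what the argument above leans on.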

Wiping out 99% of the world population is a global catastrophic risk, and likely a value drift risk and s-risk.

Thanks for writing this; it is something I have thought about before when trying to convince people who are more worried about "short-term" issues to take the "long-term" risks seriously. Essentially, one can think of two major "short-term" AI risk scenarios (or at least "medium-term" ones that "short-term"ists might take seriously), corresponding to the prospects of automating the two factors of production:

  1. Mass technological unemployment causing large swathes of workers to become superfluous and then starved out by the now AI-enabled corporations (what you worry about in this post)
  2. AI increasingly replacing "fallible" human decision-makers in corporations, if not in government, pushed by the imperative to maximize profits unfettered by any moral or legal norm (even more so than human executives are already incentivized to be; what Scott worries about here)

But if 1 and 2 happen at the same time, you've got your more traditional scenario: AI taking over the world and killing all humans once they have become superfluous. This doesn't provide a full-blown case for the more Orthodox AI-go-FOOM scenario (you would need more than that), but it at least serves as a case that Reform AI Alignment is a pressing issue. Those who are convinced of that will ultimately be more likely to take the AI-go-FOOM scenario seriously, or at least to reduce their differences with its believers to object-level disagreements about intelligence explosion macroeconomics, how powerful intelligence is as a "cognitive superpower", etc., as opposed to the tribalized meta-level disagreements that define the current "AI ethics" vs. "AI alignment" discourse.

(Admittedly, AI will probably progress simultaneously with robots, which will hit people who do more hands-on work too.)

This looks increasingly unlikely to me. It seems to me (from an outsider's perspective) that the current bottleneck in robotics is the low dexterity of existing hardware, far more than the software to animate robot arms or even the physics-simulation software to test it. And on the flip side, current proto-AGI research makes the embodied cognition thesis seem very unlikely.

At least under standard microeconomic assumptions of property ownership, you would presumably still have positive productivity of your capital (like your land). 

Well, we're not talking about microeconomics, are we? Unemployment is a macroeconomic phenomenon, and we are precisely talking about people who have little to no capital, need to work to live, and therefore need their labor to have economic value to live.