All of Matrice Jacobine's Comments + Replies

My impression is that (without even delving into any meta-level IR theory debates) Democrats are more hawkish on Russia while Republicans are more hawkish on China. So while obviously neither party is kum-ba-yah and both ultimately represent US interests, it still makes sense to expect each party to be less receptive to the idea of ending any potential arms race against the country it considers an existential threat to US interests if left unchecked. So the party that is more hawkish on a primarily military superpower would be worse on nuclear x-risk, ... (read more)

Fortunately, the existential risks posed by AI are recognized by many close to President-elect Donald Trump. His daughter Ivanka seems to see the urgency of the problem. Elon Musk, a critical Trump backer, has been outspoken about the civilizational risks for many years, and recently supported California’s legislative push to safety-test AI. Even the right-wing Tucker Carlson provided common-sense commentary when he said: “So I don’t know why we’re sitting back and allowing this to happen, if we really believe it will extinguish the human race or enslave t... (read more)
3 otto.barten
I'm aware and I don't disagree. However, in xrisk, many (not all) of those who are most worried are also most bullish about capabilities. Conversely, many (not all) who are not worried are unimpressed with capabilities. Being aware of the concept of AGI, that it may be coming soon, and of how impactful it could be, is in practice often a first step towards becoming concerned about the risks, too. Unfortunately, this is not true for everyone. Still, I would say that at least for our chances to get an international treaty passed, it is perhaps hopeful that the power of AGI is on the radar of leading politicians (although this may also increase risk through other paths).

(Defining Tool AI as a program that would evaluate the answer to a question given available data, without seeking to obtain any new data, and then shut down after having discovered the answer.) While those arguments (if successful) show that it's harder to program a Tool AI than it might look at first, so AI alignment research is still something that should be actively pursued (and I doubt Tegmark thinks AI alignment research is useless), they don't really address the point that aligned Tool AIs are still in some sense "inherently safer" than makin... (read more)

The bottom 55% of the world population own ~1% of capital, the bottom 88% own ~15%, and the bottom 99% own ~54%. That is a majority, but the top 1% are the millionaires (not even multi-millionaires or billionaires), likely owning wealth more vitally important to the economy than personal property and bank accounts, and empirically they seem to be doing fine dominating the economy already, without any neoclassical catechism about comparative advantage preventing them from doing so. However you massage the data, it seems highly implausible that driving the value... (read more)
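To make the arithmetic explicit, here is a minimal sketch (the cumulative figures are the rough ones quoted above, not taken from any dataset) converting cumulative wealth shares into per-bracket shares, in particular the implied ~46% held by the top 1%:

```python
# A minimal sketch of the bracket arithmetic, assuming the rough
# cumulative figures quoted in the comment (not from any dataset).
cumulative = {0.55: 0.01, 0.88: 0.15, 0.99: 0.54, 1.00: 1.00}

prev_pop, prev_wealth = 0.0, 0.0
for pop, wealth in sorted(cumulative.items()):
    # Each bracket's share is the difference of consecutive cumulative shares.
    print(f"{prev_pop:.0%}-{pop:.0%} of population: "
          f"~{wealth - prev_wealth:.0%} of capital")
    prev_pop, prev_wealth = pop, wealth

# Output:
# 0%-55% of population: ~1% of capital
# 55%-88% of population: ~14% of capital
# 88%-99% of population: ~39% of capital
# 99%-100% of population: ~46% of capital
```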

4 habryka
It appears to me you are still trying to talk about something basically completely different from the rest of this thread. Nobody is talking about whether driving the value of labor would be a catastrophic risk; I am saying it's not an existential risk.

Wiping out 99% of the world population is a global catastrophic risk, and likely a value drift risk and s-risk.

2 habryka
Talking about 99% of the population dying similarly requires talking about people who have capital. I don't really see the relevance of this comment?

Thanks for writing this; this is something I have thought about before when trying to convince people who are more worried about "short-term" issues to take the "long-term" risks seriously. Essentially, one can think of two major "short-term" AI risk scenarios (or at least "medium-term" ones that "short-term"ists might take seriously), corresponding to the prospects of automating the two factors of production:

  1. Mass technological unemployment causing large swathes of workers to become superfluous and then be starved out by now AI-enabled corporations
... (read more)

(Admittedly, AI will probably progress simultaneously with robots, which will hit people who do more hands-on work too.)

This looks increasingly unlikely to me. It seems to me (from an outsider's perspective) that the current bottleneck in robotics is the low dexterity of existing hardware, far more than the software to animate robot arms or even the physics-simulation software to test it. And on the flip side, current proto-AGI research makes the embodied cognition thesis seem very unlikely.

At least under standard microeconomic assumptions of property ownership, you would presumably still have positive productivity of your capital (like your land). 

Well, we're not talking about microeconomics, are we? Unemployment is a macroeconomic phenomenon, and we are precisely talking about people who have little to no capital, need to work to live, and therefore need their labor to have economic value to live.

2 habryka
No, we are talking about what the cause of existential risk is, which is not limited to people who have little to no capital, need to work to live, and need their labor to have economic value to live. For something to be an existential risk you need basically everyone to die or be otherwise disempowered. Indeed, my whole point is that the dynamics of unemployment are very different from the dynamics of existential risk.