...Fortunately, the existential risks posed by AI are recognized by many close to President-elect Donald Trump. His daughter Ivanka seems to see the urgency of the problem. Elon Musk, a critical Trump backer, has been outspoken about the civilizational risks for many years, and recently supported California’s legislative push to safety-test AI. Even the right-wing Tucker Carlson provided common-sense commentary when he said: “So I don’t know why we’re sitting back and allowing this to happen, if we really believe it will extinguish the human race or enslave t
(Defining Tool AI as a program that would evaluate the answer to a question given available data, without seeking to obtain any new data, and then shut down after having discovered the answer.) While those arguments (if successful) show that it's harder to program a Tool AI than it might look at first, so AI alignment research is still something that should be actively pursued (and I doubt Tegmark thinks AI alignment research is useless), they don't really address the point that aligned Tool AIs are still in some sense "inherently safer" than makin...
The bottom 55% of the world population own ~1% of capital, the bottom 88% own ~15%, and the bottom 99% own ~54%. That last figure is a majority, but the top 1% are the millionaires (not even multi-millionaires or billionaires), likely owning wealth more vitally important to the economy than personal property and bank accounts, and empirically they already seem to be doing fine dominating the economy, without any neoclassical catechism about comparative advantage preventing them from doing so. However you massage the data, it seems highly implausible that driving the value...
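Taking the cumulative figures above at face value (an assumption; the exact numbers vary by dataset and year), the implied share held by each top slice can be backed out with a little arithmetic:

```python
# Cumulative wealth shares by population percentile, as quoted above
# (approximate illustrative figures, not a specific dataset).
bottom_share = {55: 0.01, 88: 0.15, 99: 0.54}

# Each "top" slice holds one minus the bottom's cumulative share.
top_share = {100 - p: 1.0 - s for p, s in bottom_share.items()}

for top_pct, share in sorted(top_share.items()):
    print(f"Top {top_pct}% hold ~{share:.0%} of wealth")
# → Top 1% hold ~46%, Top 12% hold ~85%, Top 45% hold ~99%
```

So on these numbers the top 1% alone hold roughly 46% of wealth, which is the point being made about who dominates the economy.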
Thanks for writing this; it's something I have thought about before when trying to convince people who are more worried about "short-term" issues to take the "long-term" risks seriously. Essentially, one can think of two major "short-term" AI risk scenarios (or at least "medium-term" ones that "short-term"ists might take seriously), corresponding to the prospects of automating the two factors of production:
(Admittedly, AI will probably progress simultaneously with robots, which will hit people who do more hands-on work too.)
This looks increasingly unlikely to me. It seems to me (from an outsider's perspective) that the current bottleneck in robotics is the low dexterity of existing hardware, far more than the software to animate robot arms or even the physics simulation software to test it. And on the flip side, current proto-AGI research makes the embodied cognition thesis seem very unlikely.
At least under standard microeconomic assumptions of property ownership, your capital (such as your land) would presumably still have positive productivity.
Well, we're not talking about microeconomics, are we? Unemployment is a macroeconomic phenomenon, and we are precisely talking about people who have little to no capital, need to work to live, and therefore need their labor to have economic value to live.
My impression is that (without even delving into any meta-level IR theory debates) Democrats are more hawkish on Russia while Republicans are more hawkish on China. So while obviously neither party is kum-ba-yah and both ultimately represent US interests, it still makes sense to expect each party to be less receptive to the idea of ending any potential arms race against the country it considers an existential threat to US interests if left unchecked. Thus the party that is more hawkish on a primarily military superpower would be worse on nuclear x-risk, ...