BrownHairedEevee

I can't see the original comment, but this response seems really disconnected from what most people mean by "trans rights". In my experience, "trans rights" typically refers to a constellation of rights like:

  • the right to self-determination of your gender
  • the right to express, and be recognized as, your chosen gender
  • the right to public accommodation in accordance with your gender
  • the right to be free from discrimination based on your gender (including discrimination based on being transgender)

It does not imply the right to murder other people, as doing so violates their rights to life and bodily autonomy. The harm principle limits how people can exercise their rights.

The exception is that the Big Tech companies (Google, Amazon, Apple, Microsoft, although importantly not Facebook, seriously f*** Facebook) have essentially unlimited cash, and their funding situation changes little (if at all) based on their stock price.

I'm not sure if this is true right now. It seems like the entire tech industry is more cash-constrained than usual, given the high interest rates, layoffs throughout the industry, and fears of a coming recession.

Hi, I'm the user who asked this question. Thank you for responding!

I see your point about how an AGI would intentionally destroy humanity, whereas engineered pathogens would only wipe us out "by accident", but that's conditional on the AGI having "destroy humanity" as a subgoal. Most likely, a typical AGI will have some mundane, neutral-to-benevolent goal like "maximize profit by running this steel factory and selling steel". Maybe the AGI could achieve that by taking over an iron mine somewhere, or by taking over a country (or the world) and enslaving its citizens, or even by wiping out humanity. In general, my guess is that the AGI will do the least costly and least risky thing that achieves its goal. Wiping out humanity is the most expensive of these options, the AGI would likely get itself destroyed while attempting it, and (on top of everything else) an extinct humanity leaves no one to sell steel to. So I think that "enslave a large portion of humanity and export cheap steel at a hefty profit" is a subgoal this AGI would plausibly adopt, but destroying humanity is not.
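
To make that cost/risk intuition a bit more concrete, here is a minimal sketch in Python. Everything in it (the option names, the revenue and cost figures, the destruction probabilities) is invented purely for illustration; it only shows the shape of the comparison I have in mind, not a model of any real system.

```python
# Toy sketch of the "pick the least costly, least risky subgoal" intuition.
# Every number below is made up purely for illustration.

options = {
    "take over an iron mine":             {"revenue": 120.0, "cost": 20.0,  "p_destroyed": 0.05},
    "enslave a country and export steel": {"revenue": 400.0, "cost": 150.0, "p_destroyed": 0.30},
    "wipe out humanity":                  {"revenue": 0.0,   "cost": 300.0, "p_destroyed": 0.90},
}

def expected_profit(option):
    # Costs are paid up front; revenue only arrives if the AGI survives the
    # attempt (and extinction leaves no customers, hence zero revenue).
    return (1 - option["p_destroyed"]) * option["revenue"] - option["cost"]

for name, option in options.items():
    print(f"{name}: expected profit {expected_profit(option):+.1f}")

best = max(options, key=lambda name: expected_profit(options[name]))
print("Most attractive subgoal under these made-up numbers:", best)
```

The ranking depends entirely on the numbers you plug in, which is where the real disagreement lies; the only point is that extinction combines the highest cost, the highest chance of the AGI getting destroyed, and zero revenue.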

It depends on the use case - a misaligned AGI in charge of the U.S. Armed Forces could end up starting a nuclear war - but given how careful the U.S. government has been about avoiding nuclear war, I think they'd insist on an AGI being closely aligned with their interests before putting it in charge of something so high-stakes.

Also, I suspect that some militaries (like North Korea's) might be developing bioweapons and spending anywhere from 1% to 100% as much on them annually as OpenAI and DeepMind spend on AGI; we just don't know about it.

Based on your AGI-bioweapon analogy, I suspect that AGI is a greater hazard than bioweapons, but not by quite as much as your argument implies. While few well-resourced actors are interested in using bioweapons, a who's who of corporations, states, and NGOs will be interested in using AGI. And AGIs can adopt dangerous subgoals for a wide range of goals (especially resource extraction), whereas bioweapons can basically only kill large groups of people.
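
For what it's worth, here is the crude back-of-the-envelope proxy I have in mind when I say "greater hazard, but not by as much as the argument implies". It's a Python sketch with invented numbers; the two inputs just encode the two considerations above (how many well-resourced actors want the technology, and how many distinct ways its use can go catastrophically wrong), and the multiplication is only meant to show direction, not magnitude.

```python
# Crude hazard proxy: motivated, well-resourced actors times distinct
# catastrophic failure modes. All numbers are invented for illustration.

def hazard_proxy(motivated_actors: int, dangerous_failure_modes: int) -> int:
    return motivated_actors * dangerous_failure_modes

bioweapons = hazard_proxy(motivated_actors=5, dangerous_failure_modes=1)
agi = hazard_proxy(motivated_actors=500, dangerous_failure_modes=10)

print(f"bioweapons hazard proxy: {bioweapons}")
print(f"AGI hazard proxy:        {agi}")
print(f"ratio (AGI to bioweapons): {agi / bioweapons:.0f}x")
```

A multiplicative proxy like this almost certainly overstates the gap (the factors aren't independent, and as I said above, bioweapons programs may be bigger than we know), which is why I'd put AGI ahead of bioweapons, but not by as much as a naive reading of the analogy suggests.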