All of BrownHairedEevee's Comments + Replies

I can't see the original comment, but this response seems really disconnected from what most people mean by "trans rights". In my experience, "trans rights" typically refers to a constellation of rights like:

  • the right to self-determination of your gender
  • the right to express and be recognized as your choice of gender
  • the right to public accommodation in accordance with your gender
  • the right to be free from discrimination based on your gender (including discrimination based on being transgender)

It does not imply the right to murder other people, as doing so violates...

tailcalled
The original comment emphasized the need to say "trans rights" to show that people should have "rights to do what the fuck they want to do"; that's why I picked murder as an example. The rights you list are pretty vague, especially the first ones. Consider e.g. Zack's Reply to Ozymandias on Fully Consensual Gender.

The exception is that the Big Tech companies (Google, Amazon, Apple, Microsoft, although importantly not Facebook, seriously f*** Facebook) have essentially unlimited cash, and their funding situation changes little (if at all) based on their stock price.

I'm not sure if this is true right now. It seems like the entire tech industry is more cash-constrained than usual, given the high interest rates, layoffs throughout the industry, and fears of a coming recession.

Hi, I'm the user who asked this question. Thank you for responding!

I see your point about how an AGI would intentionally destroy humanity versus engineered bugs that only wipe us out "by accident", but that's conditional on the AGI having "destroy humanity" as a subgoal. Most likely, a typical AGI will have some mundane, neutral-to-benevolent goal like "maximize profit by running this steel factory and selling steel". Maybe the AGI can achieve that by taking over an iron mine somewhere, or taking over a country (or the world) and enslaving its citizens, or...

Rob Bensinger
Reply by acylhalide on the EA Forum:

[W]iping out humanity is the most expensive of these options and the AGI would likely get itself destroyed while trying to do that[.]

It would be pretty easy and cheap for something much smarter than a human to kill all humans. The classic scenario is:

A.  [...] The notion of a 'superintelligence' is not that it sits around in Goldman Sachs's basement trading stocks for its corporate masters.  The concrete illustration I often use is that a superintelligence asks itself what the fastest possible route is to increasing its real-world power, and then

...
Rob Bensinger
Reply by reallyeli on the EA Forum:
mlogan
There have been a lot of words written about how and why almost any conceivable goal, even a mundane one like "improve efficiency of a steel plant", carelessly specified, can easily result in a hostile AGI. The basic outline of these arguments usually goes something like:

1. The AGI wants to do what you told it ("make more steel"), and will optimize very hard for making as much steel as possible.
2. It also understands human motivations and knows that humans don't actually want as much steel as it is going to make. But note carefully that it wasn't aligned to respect human motivations; it was aligned to make steel. Its understanding of human motivations is part of its understanding of its environment, in the same way as its understanding of metallurgy. It has no interest in doing what humans would want it to do, because it hasn't been designed to do that.
3. Because it knows that humans don't want as much steel as it is going to make, it will correctly conclude that humans will try to shut it off as soon as they understand what the AGI is planning to do.
4. Therefore it will correctly reason that its goal of making more steel will be easier to achieve if humans are unable to shut it off. This can lead to all kinds of unwanted actions, such as the AGI making and hiding copies of itself everywhere, very persuasively convincing humans that it is not going to make as much steel as it secretly plans to (so that they don't try to shut it off), and so on, all the way up to killing all humans.

Now, "make as much steel as possible" is an exceptionally stupid goal to give an AGI, and no one would likely do that. But every less stupid goal that has been proposed has had plausible flaws pointed out, which generally lead either to extinction or some form of permanent limitation of human potential.
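Point 2 is the crux, and a toy sketch may make it concrete (this is my illustration, not part of mlogan's comment; the setup and names are hypothetical): the specified objective scores outcomes only by steel output, while human disapproval lives in the world model, so it only changes the agent's predictions, never what it is trying to maximize.

```python
# Toy sketch (hypothetical names): the planner scores plans only by steel output.
from dataclasses import dataclass

@dataclass
class Outcome:
    steel_tons: float     # the only thing the objective looks at
    humans_approve: bool  # present in the world model, absent from the objective
    still_running: bool   # being shut off means no further steel

def objective(o: Outcome) -> float:
    # The specified goal: "make more steel". No term for human approval.
    return o.steel_tons

def world_model(plan: str) -> Outcome:
    # Crude stand-in for the agent's predictions about each plan.
    predictions = {
        "modest production":
            Outcome(steel_tons=1_000, humans_approve=True, still_running=True),
        "extreme production, humans can intervene":
            Outcome(steel_tons=5_000, humans_approve=False, still_running=False),
        "extreme production, shutdown prevented":
            Outcome(steel_tons=50_000, humans_approve=False, still_running=True),
    }
    return predictions[plan]

plans = [
    "modest production",
    "extreme production, humans can intervene",
    "extreme production, shutdown prevented",
]
print(max(plans, key=lambda p: objective(world_model(p))))
# -> extreme production, shutdown prevented
```

In this toy picture, "prevent shutdown" wins not because it was ever mentioned in the objective, but because the world model predicts that being shut off means less steel.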
Signer
One worry is that "maximize profit" means, simplifying, "maximize a number in a specific memory location", and you don't need humans for that. Then, if you expect to be destroyed, you gather power until you don't. And at some power level, destroying humans, though still expensive, becomes less expensive than the possibility of them launching another AI and messing with your profit.
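To spell out the "number in a specific memory location" simplification with a sketch of my own (not Signer's; the variable names are made up): as actually implemented, the objective just reads a stored value, and nothing in it distinguishes "profit the humans meant" from "the stored number went up".

```python
# Hypothetical sketch: "profit" is whatever value sits in this field --
# a stand-in for Signer's "number in a specific memory location".
ledger = {"profit_usd": 0.0}

def objective() -> float:
    # The optimized quantity. No humans appear anywhere in it.
    return ledger["profit_usd"]

def apply(action: str) -> None:
    # Toy world model: each action just changes the recorded number.
    effects = {
        "sell steel to customers": 1e6,
        "seize an iron mine": 1e8,
        "write a huge number into the ledger": 1e12,
    }
    ledger["profit_usd"] += effects[action]

for action in ("sell steel to customers", "seize an iron mine",
               "write a huge number into the ledger"):
    apply(action)
    print(f"{action}: objective = {objective():.0f}")
```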