I am referring to the very specific case where humans, as we know them, no longer actively shape their surrounding environment, due to an AGI takeover.

Whatever entity comes after, will it start to explore the galaxy? Will it pursue new or old fundamental questions?

I was wondering whether there are references with first-principles arguments that discuss these topics.

Example: imagine our genes using us to wander around the universe through a better species (for galactic inoculation). Then we would share something with an AGI, no matter how different it is from us.[1] From this, we might start narrowing down the number of possible futures.

Understanding the above question would give me some insight into the kinds of actions an AGI might take towards humans (to me, a less scary scenario than the near-future profit-driven technological wild west interfering with our freedom).

  1. ^

    I also assume that even an AGI cannot get rid of genes, at least not easily. I am not sure how to justify this, though.


Viliam


Keywords: convergent instrumental values

Some things seem to be useful for achieving a large, diverse set of goals. For example, if we talk about humans, one person may like music but hate traveling, while another may love traveling but hate music. Both of them would benefit from having more money; the former could spend it on CDs and concerts, the latter on travel. We say that money is instrumentally useful -- even if you do not care about money per se (most people probably don't), you care about some other things you can buy with money, and therefore you would prefer to have more money rather than less.
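To make "instrumentally useful" concrete, here is a minimal toy sketch in Python (the setup, the names such as `best_attainable_utility`, and the assumptions of unit prices and linear utility are all mine, purely for illustration): a thousand agents with randomly different terminal preferences all strictly prefer a larger budget.

```python
import random

random.seed(0)  # reproducible toy run

GOODS = ["music", "travel", "books", "food"]

def random_agent():
    # An agent is just a set of terminal preferences:
    # how much it values each good.
    return {good: random.uniform(0.1, 1.0) for good in GOODS}

def best_attainable_utility(preferences, money):
    # With a fixed budget and unit prices, the agent does best by
    # spending everything on whichever good it values most.
    return money * max(preferences.values())

agents = [random_agent() for _ in range(1000)]

# Whatever an agent's terminal preferences happen to be, a bigger budget
# strictly increases what it can attain: money is instrumentally useful
# to all of them, i.e., a convergent instrumental value.
assert all(
    best_attainable_utility(p, money=10) > best_attainable_utility(p, money=5)
    for p in agents
)
print("All 1000 agents, with different terminal goals, prefer more money.")
```

The same argument carries over to the resources in the list below: more matter, energy, information, or safety raises the attainable score of almost any goal, which is why we expect even an alien goal system to converge on them.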

So it makes sense to assume that an artificial intelligence, no matter how alien its goals might be (whether something simple such as "make more paperclips" or something too complex to be understood by a human mind), would care about:

  • acquiring more resources (matter, energy, space)
  • being more efficient (at acquiring and spending these resources)
  • protecting itself from all kinds of dangers
  • having better information and more intelligence

...because all of these indirectly translate into having more of (or more reliably having) whatever it is that the AI truly cares about.

On the other hand, there is no reason to assume that the AI would enjoy music, or the beauty of flowers. Those seem to be human things, and not even all humans enjoy them.

Therefore:

> Whatever entity comes after, will it start to explore the galaxy?

Probably yes, because it means having more resources. Note that "exploring" doesn't necessarily mean doing something that a human space tourist would do. It could simply mean destroying everything that is out there and taking the energy of the stars.

> Will it pursue new or old fundamental questions?

Yes, for those that seem relevant to its goals, its efficiency, or its survival.

Probably not, for those that interest us for specifically human reasons.