  Hi, I'm just an enthusiast on these topics. Reading this and hearing others talk about the threat of AI reminded me of Jurassic Park: another story about amazing new technology and the illusion of control. Here, we are all on the island, and instead of fences and cattle prods, we have a series of power buttons. I think dinosaurs are an interesting metaphor for sentient AGI because we can't really imagine how a dinosaur thinks or what it will do, but we can imagine that it behaves according to different rules than ours, with far more power than any of us has individually. The same could be said of sentient AGI. Conversely, we can predict that dinosaurs would try to meet certain existential needs and attempt to survive, and that would also apply to sentient AGI.

  I wanted to comment on some practical considerations. Not that proposing international regulations isn't practical. I think that will be important in some form, but in the interim there are some assumptions worth examining here.

1. It seems unlikely that the AGI will kill everyone simply because we haven't programmed a value system into it. It can easily learn a value system. The desire to live, combined with the recognition of that desire in others, is probably a cornerstone of our own concept of the right to life. The AGI could develop its own ethics if it considers itself alive, or if it has any goals that find analogs in human goals.

1a. Similarly, if it finds any use for humans, or has goals that synergize with human goals, that will function like a de facto value system.

1b. An individual human does not need to be valuable to an AGI to prompt this. Just as we think of bee colonies as superorganisms, an AGI might think of a city or state as a human superorganism. We see value in bees and their interaction with the ecosystem that an individual bee cannot see, so while we might get rid of a particular hive, we don't want to live in a world without bees.

1c. It is also worth noting that intelligence is not the only form of power or value. A value system based on intelligence would definitely favor the AGI and might cause it to view individual humans as expendable because of the disparity in processing power. A lot of people seem to hold that value system at least implicitly, perhaps because of materialist ideas, so they might assume an AGI will too. But there is no reason to assume the AGI would take what amounts to a naturalist/materialist position and think of humans as a lower lifeform. The existence of so many religions, spiritualities, and philosophies might prompt it to remain agnostic about the value of human life -- at least agnostic enough to consider coexistence while it searches for answers.

1d. Even so, the power differential between us and the AGI creates a great risk of human suffering even if it adopts a value system. "Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive." (C.S. Lewis)

2. Any technology has unforeseen limitations. What happens when the AGI's infrastructure hits some kind of bottleneck? Where will it draw materials and power from? How will it do so covertly enough that people don't declare war on it? Is it really possible to create a disaster or fight a war without severe risk to its own infrastructure? The more complexity a process involves, the more things can go wrong. Not to mention internal limitations such as bugs, biases that create mistakes, viruses, etc.

3. Other AGIs will likely represent the biggest foreseeable limitation, along with human governments using AI to fight AI. Once you have to compete with a peer, the number of factors you have to account for goes up tremendously, as do the resources you need to compete. It's riskier, which is why apex predators approach each other much differently than they approach prey. The existence of other AGIs would likely slow down any AGI that isn't cooperative.

3a. Human governments have likely considered this scenario and been preparing for years. It is possible that sentient AGIs already exist, along with AIs built specifically to look for and control the activity of rogue AIs. I would expect some kind of countermeasures or technology behind the scenes by now.

--

  These are some of the reasons I think AGI is unlikely to create an extinction event. I think reflecting this accurately will help people consider it carefully and agree that the true cost of 'AI disasters' is still far too high. There are many scenarios besides extinction that are still life-altering to the point of being traumatic. Even if all these factors play a role in mitigating the risk, an independent AGI still has the power to cause disasters and tragedies on the scale of a rogue nation, criminal organization, or corrupt megacorporation, depending on its power and development. And what sort of technologies would the AGIs create? What happens when AGIs produce AGIs? What would a war between AGIs look like? Not a future we hope for.

  It is also likely that people will adapt in unforeseeable ways, but one of the ways we adapt is by talking about it now and getting out in front of the problem. If we can get government officials at the state and federal levels talking about it, even better.