Mitchell_Porter

Answer by Mitchell_Porter

I have seen a poll asking "when will indefinite lifespans be possible?", and Eric Drexler answered "1967", because that was when cryonic suspension first became available. 

Similarly, I think we've had AGI at least since 2022, because even then, ChatGPT was an intelligence, and it was general, and it was artificial. 

(To deny that the AIs we have now have general intelligence, I think one would have to deny that most humans have general intelligence, too.)

So that's my main reason for very short timelines. We already crossed the crucial AGI threshold through the stupid serendipity of scaling up autocomplete, and now it's just a matter of refining the method, and attaching a few extra specialized modules. 

What's the difference between "panology" and "science"?

By the start of April half the world was locked down, and Covid was the dominant factor in human affairs for the next two years or so. Do you think that issues pertaining to AI agents are going to be dominating human affairs so soon and so totally? 

Hi - I would like you to explain, in rather more detail, how this entity works. It's "Claude", but presumably you have set it up in some way so that it has a persistent identity and self-knowledge beyond just being Claude? 

If I understand correctly, you're trying to figure out what Xi would do with the unlimited power offered by an intent-aligned ASI, or how he would react to the prospect of such, etc. 

Xi's character might matter, but I am impressed by the claim here that a competent Chinese ruler will be guided first by notions of good statecraft, with any details of personality or private life to be kept apart from their decisions and public actions. 

I'm sure that Chinese political history also offers many examples of big personalities and passionate leaders, but that would be more relevant to times when the political order is radically in flux, or needs to be rebuilt from nothing. Xi came to power within a stable system. 

So you might want to also ask how the Chinese system and ideology would respond to the idea of superintelligent AI - that is, if they are even capable of dealing with the concept! There must be considerable probability that the system would simply tune out such threatening ideas, in favor of tamer notions of AI - we already see this filtering at work in the West. 

I suppose one possibility is that they would view AI, properly employed, as a way to realize the communist ideal for real. Communist countries always say that communism is a distant goal, for now we're building socialism, and even this socialism looks a lot like capitalism these days. And one may say that the powerbrokers in such societies have long since specialized in wielding power under conditions of one-party capitalism and mercantile competition, rather than the early ideal of revolutionary leveling for the whole world. Nonetheless, the old ideal is there, just as the religious ideals still exert a cultural influence in nominally secular societies descended from a religious civilization. 

When I think about Chinese ASI, the other thing I think about is their online fantasy novels, because that's the place in Chinese culture where they deal with scenarios like a race to achieve power over the universe. The novels may be about competition to acquire the magical legacy of a vanished race of immortals, rather than competition to devise the perfect problem-solving algorithm, but this is where you can find a Chinese literature that explores the politics and psychology of such a competition, all the way down to the interaction between the private and public lives of the protagonists. 

Alexander Dugin speaks of "trumpo-futurism" and "dark accelerationism"

Dugin is a kind of Zizek of Russian multipolar geopolitical thought. He's always been good at quickly grasping new political situations and giving them his own philosophical sheen. In the past he has spoken apocalyptically of AI and transhumanism, considering them to be part of the threat to worldwide tradition coming from western liberalism. I can't see him engaging in wishful thinking like "humans and AIs coexist as equals" or "AIs migrate to outer space leaving the Earth for humans", so I will be interested to see what he says going forward. I greatly regret that his daughter (Daria Dugina) was assassinated, because she was taking a serious interest in the computer age's ontology of personhood, but from a Neoplatonist perspective; who knows what she might have come up with. 

Started promisingly, but like everyone else, I don't believe in the ten-year gap from AGI to ASI. If anything, we got a kind of AGI in 2022 (with ChatGPT), and we'll get ASI by 2027, from something like your "cohort of Shannon instances". 

For my part, I have been wondering this week what a constructive reply to this would be. 

I think your proposed imperatives and experiments are quite good. I hope that they are noticed and thought about. I don't think they are sufficient for correctly aligning a superintelligence, but they can be part of the process that gets us there. 

That's probably the most important thing for me to say. Anything else is just a disagreement about the nature of the world as it is now, and isn't as important. 

Your desire to do good and your specific proposals are valuable. But you seem to be a bit naive about power, human nature, and the difficulty of doing good even if you have power. 

For example, you talk about freeing people under oppressive regimes. But every extant political system and major ideology has some corresponding notion of the greater good, and what you are calling oppression is supposed to protect that greater good, or to protect the system against encroaching rival systems with different values. 

You mention China as oppressive and say Chinese citizens "can do [nothing] to cause meaningful improvement from my perspective". So what is it when Chinese people bring sanitation or electricity to a village, or when someone in the big cities invents a new technology or launches a new service? That's Chinese people making life better for Chinese people. Evidently your focus is on the one-party politics and the vulnerability of the individual to the all-seeing state. But even those have their rationales. The Leninist political system is meant to keep power in the hands of the representatives of the peasants and the workers. And the all-seeing state is just doing what you want your aligned superintelligence to do - using every means it has to bring about the better world. 

Similar defenses can be made of every western ideology, whether conservative or liberal, progressive or libertarian or reactionary. They all have a concept of the greater good, and they all sacrifice something for the sake of it. In every case, such an ideology may also empower individuals, or specific cliques and classes, to pursue their self-interest under the cover of the ideology. But all the world's big regimes have some kind of democratic morality, as well as a persistent power elite. 

Regarding a focus on suffering - the easiest way to abolish suffering is to abolish life. All the difficulties arise when you want everyone to have life, and freedom too, but without suffering. Your principles aren't blind to this; e.g. number 3 ("spread empathy") might be considered a way to preserve freedom while reducing the possibility of cruelty. But consider number 4, "respect diversity". This can clash with your moral urgency. Give people freedom, and they may focus on their personal flourishing rather than on suffering or oppression somewhere else. Do you leave them to do their thing, so that the part of life's diversity which they embody can flourish, or do you lean on them to take part in some larger movement? 

I note that @daijin has already provided a different set of values which are rivals to your own. Perhaps someone could write the story of a transhuman world in which all the old politics has been abolished, and instead there's a cold war between blocs that have embraced these two value systems! 

The flip side of these complaints of mine is that it's also not a foregone conclusion that, if some group manages to create superintelligence and actually knows what they're doing - i.e. they can choose its values with confidence that those values will be maintained - we'll just have perpetual oppression worse than death. As I have argued, every serious political ideology has some notion of the greater good that is part of the ruling elite's culture. That elite may contain a mix of cynics, the morally exhausted and self-interested, the genuinely depraved, and those born to power, but it will also contain people who are fighting for an ideal, and new arrivals with bold ideas and a desire for change; and also those who genuinely see themselves as lovers of their country or their people or humanity, but who also have an enormously high opinion of themselves. The dream of the last kind of person is not some grim hellscape; it's a utopia of genuine happiness in which they are also worshipped as transhumanity's greatest benefactor. 

Another aspect of what I'm saying is that you feel this pessimistic about the world because you are alienated from all the factions who actually wield power. If you were part of one of those elite clubs that actually has a chance of winning the race to create superintelligence, you might have a more benign view of the prospect of them ending up wielding supreme power. 
