Mitchell_Porter

Comments (sorted by newest)
The Wise, the Good, the Powerful
Mitchell_Porter · 17h

If the SI has its own interests, then extrapolating only human volition ignores a morally relevant being. 

CEV done properly includes appropriate expansion of the circle of moral concern, because that is part of human volition. 

Will you be able to continue this sequence? You have touched upon many important considerations. 

undefined's Shortform
Mitchell_Porter · 2d

It's actually "reflectively endorse" that is doing even more work, I think. Think about the people who fall in love with AI, who believe they have made AI conscious by prompting it correctly, etc. Of course there are people who think that AI right now should replace humanity. 

Why am I not currently starting a religion around AI or similar topics?
Mitchell_Porter · 3d

For an analysis of the AI race, this is unusually broad-context and big-picture. There's a lot to digest. 

My main thoughts for now are that I disagree with two aspects of your historical narrative. I'm not sure they matter that much to the conclusions, but I'll state them.

First idea that I disagree with: the idea that nuclear proliferation was largely a matter of America deciding who it would share the technology with, on the basis of religious affinity. Once World War II was over, the US didn't want to share nuclear weapons with anyone, not even Britain. The initial proliferation that occurred was much more a matter of states independently reaching the scientific and industrial level needed to develop nuclear weapons on their own, and America being unable to stop them. The first states across the threshold were disproportionately Judeo-Christian because nuclear knowledge was most advanced in that civilization, but it didn't take many years for the rest of the world to catch up. 

At that point, the five permanent members of the UN Security Council, who had a privileged status within the world system set up after World War II, now had a common interest in curtailing proliferation, and the NNPT was engineered in a way that reflects that. Between the lines, the NNPT says that within the parts of the world that have accepted the treaty, only those five states are allowed to have nuclear weapons. Oh yes, they commit to helping everyone else with civilian nuclear programs, and working towards nuclear disarmament in the long term; but in the short term, the meaning of the NNPT was that only the big five "should" have nuclear weapons. In practice, the big five have gone on to engage in covert acts of proliferation for strategic reasons, as well as developing policies to deal with the reality of nuclear weapons states outside the treaty system, but formally the ideal has been that, in the short term, only the big five should have nukes, and in the long term, no one should. 

The idea that the US was an all-powerful gatekeeper of nuclear proliferation, one that chose to let it happen only within the Judeo-Christian sphere, is, I think, a perspective distorted by how the world looks now compared to how it was 80 years ago. Back then, Britain and France weren't just America's allies in the European region; they were at the head of international empires. They conceived of themselves as independent great powers who had made a strategic choice to work with the US... You may have run across the idea that historically, Europe has been united far less often than, say, China. The present situation of a quasi-federal Europe under American tutelage is not the norm, and did not yet exist at the dawn of the nuclear age.

Second idea that I disagree with: the associated idea that during the Cold War, world economic development proceeded along the same lines of civilizational/religious affinity, as in your hypothesis about nuclear proliferation. 

I would roughly view the history as follows. Before the industrial revolution, you could indeed view world history in terms of competing cultures, religions, or civilizations. At that time, the whole world was poor by modern standards. Then, the industrial revolution happened within parts of Europe and created a new kind of society with a potential for an unprecedented standard of living. 

Some of those societies were thereby empowered to create worldwide empires that combined industrial capitalism at home with old-fashioned imperial resource extraction abroad. From the perspective of competition among civilizations, this looked like the worldwide triumph of Christian European civilization. However, it was really the triumph of a new kind of society and culture ("modernity") that clashed with many aspects of European tradition (see: Nietzsche and the death of God), and components of which could be mastered by other civilizations (see: Imperial Japan and every decolonization movement). 

By the time you get to the Cold War and the atomic age, those European empires are already disintegrating, as you now have elites outside the West who wish to do as Japan did, and develop their own version of modernity. Part of the competition between the two new superpowers, America and Russia, was to promote their own social ideology as the key to sovereign economic development. They both tried and failed to be hegemonic within their spheres of influence (China complained of "social imperialism" from Russia; India was non-aligned), but we were not yet fully back in the world where it was an old-fashioned competition between traditional civilizations. It was a transitional period in which two secular universalisms - capitalist democracy and socialist revolution - competed to be the template of modernization.

When Russia abandoned its ideology, there were two interpretations proposed by American political scientists. Fukuyama proposed that the liberal-democratic version of modernity would now be universal. From this perspective, the entire Cold War was a factional fight among Hegelians as to the climactic form of modernity. Hegel had a model of historical progress in the form of broadening empowerment, from absolutism, to aristocracy, and finally to a democratic republic with a concept of citizenship. The Left Hegelian, Marx, proposed one more dialectical stage in the form of communism. The Right Hegelians thought this was a mistake, and Fukuyama declared them the winners. 

In the 1990s, this vision looked plausible. Everything was about markets and the spread of democracy. Ideological dissent was limited to small "rogue states". But in the 2000s, the US found itself fighting jihadis with a very different civilizational vision; in the 2010s, world economic management required giving BRICS a seat at the table alongside the G-7; and in the 2020s, the US is governed by a movement (MAGA) that openly affiliates itself specifically with the West, rather than with the world as a whole. All this conforms to the predictions of Fukuyama's rival Huntington, who predicted that the postmodern future would see a modernized version of the "clash of civilizations" already seen in the preindustrial past. 

Summary. What I'm proposing is that your analytic framework, when it comes to nuclear proliferation and economic development during the Cold War, is the wrong one. To a great extent it does apply within the post-1990s world, but the Cold War was organized around competing secular modernities, not (e.g.) "Christianity versus Atheism". 

Accelerando as a "Slow, Reasonably Nice Takeoff" Story
Mitchell_Porter · 4d

You have a point in that Vinge portrays outward migration into higher Zones, with all their unexplained advantages including computational advantage, as part of the process by which a civilization of natural intelligences evolves to the point of producing a superintelligence. (For those who haven't seen the book, the Zones are concentric regions of the galaxy, in which the further out you go, the more advanced the technology that is possible, including superintelligence and faster-than-light travel.) 

Three Paths Through Manifold
Mitchell_Porter · 5d

I assume Manifold here means "reality", and not just the betting site?

Open Thread Autumn 2025
Mitchell_Porter · 6d

> I don't know that this would fit with the idea of no free will. Surely you're not really making any decisions.

This sounds like "epiphenomenalism" - the idea that the conscious mind has no causal power; it's just somehow along for the ride of existence, while atoms or whatever do all the work. This is a philosophy that alienates you from your own power to choose.

But there is also "compatibilism". This is originally the idea that free will is compatible with determinism, because free will is here defined to mean, not that personal decisions have no causes at all, but that all the causes are internal to the person who decides. 

A criticism of compatibilism is that this definition isn't what's meant by free will. Maybe so. But for the present discussion, it gives us a concept of personal choice which isn't disconnected from the rest of cause and effect. 

We can consider simpler mechanical analogs: any device that "makes choices", whether it's a climate control system in a building or a computer running multiple processes. Does epiphenomenalism make sense here? Is the device irrelevant to the "choice" that happens? I'd say no: the device is the entity that performs the action. The action has a cause, but that cause is the state of the device itself, along with the relevant physical laws.
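
Purely as an illustration of that device analogy, here is a minimal sketch (my own toy example, not anything from the thread): a thermostat whose "choice" to heat or idle is determined entirely by its own internal state together with its input, which is the compatibilist sense in which the causes of a decision are internal to the decider.

```python
# A toy "choosing" device in the spirit of the climate-control example.
# The decision is fully determined by the device's own internal state
# (setpoint, hysteresis, current mode) plus its input: the causes of the
# "choice" are internal to the thing that chooses.

class Thermostat:
    def __init__(self, setpoint: float, hysteresis: float = 0.5):
        self.setpoint = setpoint      # internal state: desired temperature
        self.hysteresis = hysteresis  # internal state: tolerance band
        self.heating = False          # internal state: current mode

    def decide(self, measured_temp: float) -> bool:
        """Return True to run the heater, False to idle."""
        if measured_temp < self.setpoint - self.hysteresis:
            self.heating = True
        elif measured_temp > self.setpoint + self.hysteresis:
            self.heating = False
        # Inside the band, keep the previous mode: the device's own
        # history is part of the cause of its next action.
        return self.heating


if __name__ == "__main__":
    t = Thermostat(setpoint=21.0)
    for temp in (19.0, 20.8, 21.3, 22.0):
        print(temp, "->", "heat" if t.decide(temp) else "idle")
```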

We can think similarly of human actions where conscious choice is involved. 

> But your values wouldn't have been decided by you.

Perhaps you didn't choose your original values. But a person's values can change, and if this was a matter of self-aware choice between two value systems, I'm willing to say that the person decided on their new values. 

Open Thread Autumn 2025
Mitchell_Porter · 6d

AI interpretability can assign meaning to states of an AI, but what about process? Are there principled ways of concluding that an AI is thinking, deciding, trying, and so on?

abramdemski's Shortform
Mitchell_Porter · 7d

It would hardly be the first time that someone powerful went mad, or was thought to be mad by those around them, and the whole affair was hushed up, or the courtiers just went along with it. Wikipedia says that the story of the emperor's new clothes goes back at least to 1335... Just last month, Zvi was posting someone's theory about why rich people go mad. I think the first time I became aware of the brewing alarm around "AI psychosis" was the case of Geoff Lewis, a billionaire VC who has neither disowned his AI-enhanced paranoia of a few months ago nor kept going with it (instead he got married). And I think I first heard of "vibe physics" in connection with Uber founder Travis Kalanick.

Open Thread Autumn 2025
Mitchell_Porter · 8d

The consequences for an individual depend on the details. For example, if you still understand yourself as being part of the causal chain of events, because you make decisions that determine your actions - it's just that your decisions are in turn determined by psychological factors like personality, experience, and intelligence - your sense of agency may remain entirely unaffected. The belief could even impact your decision-making positively, e.g. via a series of thoughts like "my decisions will be determined by my values" - "what do my values actually imply I should do in this situation" - followed by enhanced attention to reasoning about the decision. 

On the other hand, one hears that loss of belief in free will can be accompanied by loss of agency or loss of morality, so, the consequences really depend on the psychological details. In general, I think an anti-free-will position that alienates you from the supposed causal machinery of your decision-making, rather than one that identifies you with it, has the potential to diminish a person.  

Sora and The Big Bright Screen Slop Machine
Mitchell_Porter · 10d

I have three paradigms for how something like this might "work" or at least be popular:

  1. Filters as used in smartphone photos and videos. Here the power to modify the image operates strictly as an addendum to real human-to-human communication. The Sora 2 app seems a bit like an attempt to apply this model to the much more powerful capabilities of generative video.
  2. The Sora 1 feed. This is just a feed of images and videos created by users, which other users can vote on. The extra twist is that you can usually see the prompt, storyboard, and source material used to generate them, so you can take that material and create your own variations... This paradigm is that of a genuine community of creators - people who were using Sora anyway, and are now able to study and appropriate each other's creations. One difference between this paradigm and the "filter" paradigm is that the characters appearing in the creations are not the users; they are basically famous or fictional people.
  3. Virtual reality / shared gaming worlds. It seems to me that something like this is favorable if you intend to maximize the creative/generative power available to the user and you still want people communicating with each other, rather than inhabiting solipsistic worlds. You need some common frame so that all the morphing, opening of rabbit holes to new spaces, etc., doesn't tear the shared virtuality apart, geographically and culturally. You probably also need some kind of rules on who can create and puppet specific personas, so that you can't have just anyone wearing your face (whether that's your natural face, or one that you designed for your own use); a toy sketch of such a rule follows after this list.
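
To make that last point a little more concrete, here is a hypothetical sketch (the names and structure are mine, not taken from Sora or from the post) of the kind of rule that would keep just anyone from wearing your face: a persona registry in which only the owner, or users the owner has explicitly licensed, may puppet a given persona.

```python
# Hypothetical sketch; PersonaRegistry, may_puppet, etc. are illustrative
# names, not any real product's API. One possible rule: only a persona's
# owner, or users the owner has licensed, may animate ("puppet") it.

from dataclasses import dataclass, field


@dataclass
class Persona:
    persona_id: str
    owner: str                                  # whose face/likeness this is
    licensed_users: set = field(default_factory=set)

    def may_puppet(self, user: str) -> bool:
        return user == self.owner or user in self.licensed_users


class PersonaRegistry:
    def __init__(self) -> None:
        self._personas = {}

    def register(self, persona_id: str, owner: str) -> Persona:
        persona = Persona(persona_id, owner)
        self._personas[persona_id] = persona
        return persona

    def license(self, persona_id: str, requester: str, other_user: str) -> None:
        """Only the owner can license their persona to someone else."""
        persona = self._personas[persona_id]
        if requester != persona.owner:
            raise PermissionError("only the owner can license this persona")
        persona.licensed_users.add(other_user)

    def can_use(self, persona_id: str, user: str) -> bool:
        return self._personas[persona_id].may_puppet(user)


if __name__ == "__main__":
    registry = PersonaRegistry()
    registry.register("alice_face", owner="alice")
    print(registry.can_use("alice_face", "bob"))   # False
    registry.license("alice_face", requester="alice", other_user="bob")
    print(registry.can_use("alice_face", "bob"))   # True
```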
Posts

Mitchell_Porter's Shortform (8 karma, 2y, 24 comments)
Understanding the state of frontier AI in China (11 karma, 19d, 3 comments)
Value systems of the frontier AIs, reduced to slogans (4 karma, 3mo, 0 comments)
Requiem for the hopes of a pre-AI world (73 karma, 5mo, 0 comments)
Emergence of superintelligence from AI hiveminds: how to make it human-friendly? (12 karma, 6mo, 0 comments)
Towards an understanding of the Chinese AI scene (21 karma, 7mo, 0 comments)
The prospect of accelerated AI safety progress, including philosophical progress (11 karma, 7mo, 0 comments)
A model of the final phase: the current frontier AIs as de facto CEOs of their own companies (23 karma, 7mo, 2 comments)
Reflections on the state of the race to superintelligence, February 2025 (21 karma, 8mo, 7 comments)
The new ruling philosophy regarding AI (29 karma, 1y, 0 comments)
First and Last Questions for GPT-5* (question, 20 karma, 2y, 5 comments)