Mitchell_Porter

Comments
Three Paths Through Manifold
Mitchell_Porter · 13h

I assume Manifold here means "reality", and not just the betting site?

Open Thread Autumn 2025
Mitchell_Porter · 2d

I don't know that this would fit with the idea of no free will. Surely you're not really making any decisions.

This sounds like "epiphenomenalism" - the idea that the conscious mind has no causal power, it's just somehow along for the ride of existence, while atoms or whatever do all the work. This is a philosophy that alienates you from your own power to choose. 

But there is also "compatibilism". This is originally the idea that free will is compatible with determinism, because free will is here defined to mean, not that personal decisions have no causes at all, but that all the causes are internal to the person who decides. 

A criticism of compatibilism is that this definition isn't what's meant by free will. Maybe so. But for the present discussion, it gives us a concept of personal choice which isn't disconnected from the rest of cause and effect. 

We can consider simpler mechanical analogs. Consider any device that "makes choices", whether it's a climate control system in a building, or a computer running multiple processes. Does epiphenomenalism make sense here? Is the device irrelevant to the "choice" that happens? I'd say no: the device is the entity that performs the action. The action has a cause, but it is the state of the device itself, along with the relevant physical laws, which is the cause. 
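
A minimal sketch of that point, as a toy thermostat (the class name, fields, and thresholds here are invented purely for illustration): the "choice" below is fully determined, but what determines it is the device's own state plus its rules.

```python
# Toy illustration (hypothetical names and thresholds): a climate controller
# whose "choice" is entirely caused by its own internal state plus fixed rules.
from dataclasses import dataclass

@dataclass
class ClimateController:
    setpoint: float         # internal state: desired temperature (deg C)
    deadband: float = 0.5   # internal state: tolerance before acting

    def choose_action(self, measured_temp: float) -> str:
        # The "decision" is determined, but the thing doing the determining
        # is the controller itself: its setpoint, its deadband, these rules.
        if measured_temp < self.setpoint - self.deadband:
            return "heat"
        if measured_temp > self.setpoint + self.deadband:
            return "cool"
        return "idle"

controller = ClimateController(setpoint=21.0)
print(controller.choose_action(18.0))  # -> "heat": caused by the device's own state
```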

We can think similarly of human actions where conscious choice is involved. 

But your values wouldn't have been decided by you.

Perhaps you didn't choose your original values. But a person's values can change, and if this was a matter of self-aware choice between two value systems, I'm willing to say that the person decided on their new values. 

Open Thread Autumn 2025
Mitchell_Porter · 2d

AI interpretability can assign meaning to states of an AI, but what about process? Are there principled ways of concluding that an AI is thinking, deciding, trying, and so on?

abramdemski's Shortform
Mitchell_Porter · 2d

It would hardly be the first time that someone powerful went mad, or was thought to be mad by those around them, and the whole affair was hushed up, or the courtiers just went along with it. Wikipedia says that the story of the emperor's new clothes goes back at least to 1335... Just last month, Zvi was posting someone's theory about why rich people go mad. I think the first time I became aware of the brewing alarm around "AI psychosis" was the case of Geoff Lewis, a billionaire VC who has neither disowned his AI-enhanced paranoia of a few months ago, nor kept going with it (instead he got married). And I think I first heard of "vibe physics" in connection with Uber founder Travis Kalanick.

Open Thread Autumn 2025
Mitchell_Porter · 4d

The consequences for an individual depend on the details. For example, if you still understand yourself as being part of the causal chain of events, because you make decisions that determine your actions - it's just that your decisions are in turn determined by psychological factors like personality, experience, and intelligence - your sense of agency may remain entirely unaffected. The belief could even impact your decision-making positively, e.g. via a series of thoughts like "my decisions will be determined by my values" - "what do my values actually imply I should do in this situation" - followed by enhanced attention to reasoning about the decision. 

On the other hand, one hears that loss of belief in free will can be accompanied by loss of agency or loss of morality, so, the consequences really depend on the psychological details. In general, I think an anti-free-will position that alienates you from the supposed causal machinery of your decision-making, rather than one that identifies you with it, has the potential to diminish a person.  

Sora and The Big Bright Screen Slop Machine
Mitchell_Porter · 6d

I have three paradigms for how something like this might "work" or at least be popular:

  1. Filters as used in smartphone photos and videos. Here the power to modify the image is exercised strictly as an addendum to real human-to-human communication. The Sora 2 app seems a bit like an attempt to apply this model to the much more powerful capabilities of generative video.
  2. The Sora 1 feed. This is just a feed of images and videos created by users, which other users can vote on. The extra twist is that you can usually see the prompt, storyboard, and source material used to generate them, so you can take that material and create your own variations... This paradigm is that of a genuine community of creators - people who were using Sora anyway, and are now able to study and appropriate each other's creations. One difference between this paradigm and the "filter" paradigm is that the characters appearing in the creations are not the users; they are basically famous or fictional people.
  3. Virtual reality / shared gaming worlds. It seems to me that something like this is favorable if you intend to maximize the creative/generative power available to the user, and you still want people to communicate with each other rather than inhabit solipsistic worlds. You need some common frame so that all the morphing, opening of rabbit holes to new spaces, etc., doesn't tear the shared virtuality apart, geographically and culturally. You probably also need some kind of rules on who can create and puppet specific personas, so that you can't have just anyone wearing your face (whether that's your natural face, or one that you designed for your own use).
Pavrati Jain's Shortform
Mitchell_Porter · 6d

They say Kimi K2 is good at writing fiction (Chinese web novels, originally). I wonder if it is specifically good at plot, or narrative causality? And if Eliezer and his crew had serious backing from billionaires, with the correspondingly enhanced ability to develop big plans and carry them out, I wonder if they really would do something like this on the side, in addition to the increasingly political work of stopping frontier AI? 

Matthias Dellago's Shortform
Mitchell_Porter · 6d

In physics, it is sometimes asked why there should be just three (large) space dimensions. No one really knows, but there are various mathematical properties unique to three or four dimensions, to which appeal is sometimes made. 

I would also consider the recent (last few decades) interest in the emergence of spatial dimensions from entanglement. It may be that your question can be answered by considering these two things together. 

Christian homeschoolers in the year 3000
Mitchell_Porter · 7d

not the worst outcome

Are you imagining a basically transhumanist future where people have radical longevity and other such boons, but they happen to be trapped within a particular culture (whether that happens to be Christian homeschooling or Bay Area rationalism)? Or could this also be a world where people live lives with a brevity and hazardousness comparable to historic human experience, and in which, in addition, their culture has an unnatural stability maintained by AI working in the background? 

My Brush with Superhuman Persuasion
Mitchell_Porter · 8d

It would be interesting to know the extent to which the distribution of beliefs in society is already the result of persuasion. We could then model the immediate future in similar terms, but with the persuasive "pressures" amplified by human-directed AI. 

Posts

Mitchell_Porter's Shortform (2y)
Understanding the state of frontier AI in China (14d)
Value systems of the frontier AIs, reduced to slogans (3mo)
Requiem for the hopes of a pre-AI world (4mo)
Emergence of superintelligence from AI hiveminds: how to make it human-friendly? (5mo)
Towards an understanding of the Chinese AI scene (7mo)
The prospect of accelerated AI safety progress, including philosophical progress (7mo)
A model of the final phase: the current frontier AIs as de facto CEOs of their own companies (7mo)
Reflections on the state of the race to superintelligence, February 2025 (7mo)
The new ruling philosophy regarding AI (11mo)
First and Last Questions for GPT-5* (2y)