LESSWRONG
Mitchell_Porter
Comments

Sorted by
Newest
No wikitag contributions to display.
8Mitchell_Porter's Shortform
2y
24
Sora and The Big Bright Screen Slop Machine
Mitchell_Porter · 2d

I have three paradigms for how something like this might "work" or at least be popular:

  1. Filters, as used in smartphone photos and videos. Here the power to modify the image operates strictly as an addendum to real human-to-human communication. The Sora 2 app seems a bit like an attempt to apply this model to the much more powerful capabilities of generative video.
  2. The Sora 1 feed. This is just a feed of images and videos created by users, which other users can vote on. The extra twist is that you can usually see the prompt, storyboard, and source material used to generate them, so you can take that material and create your own variations... This paradigm is that of a genuine community of creators: people who were using Sora anyway, and who can now study and appropriate each other's creations. One difference from the "filter" paradigm is that the characters appearing in the creations are not the users; they are mostly famous or fictional people.
  3. Virtual reality / shared gaming worlds. Something like this seems favorable if you intend to maximize the creative/generative power available to the user while still having people communicate with each other, rather than inhabit solipsistic worlds. You need some common frame so that all the morphing, the opening of rabbit holes to new spaces, etc., doesn't tear the shared virtuality apart, geographically and culturally. You probably also need rules about who can create and puppet specific personas, so that not just anyone can wear your face (whether that's your natural face, or one that you designed for your own use).
Pavrati Jain's Shortform
Mitchell_Porter · 2d

They say Kimi K2 is good at writing fiction (Chinese web novels, originally). I wonder if it is specifically good at plot, or narrative causality? And if Eliezer and his crew had serious backing from billionaires, with the correspondingly enhanced ability to develop big plans and carry them out, I wonder if they really would do something like this on the side, in addition to the increasingly political work of stopping frontier AI? 

Matthias Dellago's Shortform
Mitchell_Porter · 2d

In physics, it is sometimes asked why there should be just three (large) space dimensions. No one really knows, but there are various mathematical properties unique to three or four dimensions, to which appeal is sometimes made. 

I would also consider the recent (last few decades) interest in the emergence of spatial dimensions from entanglement. It may be that your question can be answered by considering these two things together. 

Christian homeschoolers in the year 3000
Mitchell_Porter · 3d

"not the worst outcome"

Are you imagining a basically transhumanist future where people have radical longevity and other such boons, but they happen to be trapped within a particular culture (whether that happens to be Christian homeschooling or Bay Area rationalism)? Or could this also be a world where people live lives with a brevity and hazardousness comparable to historic human experience, and in which, in addition, their culture has an unnatural stability maintained by AI working in the background? 

My Brush with Superhuman Persuasion
Mitchell_Porter · 4d

It would be interesting to know the extent to which the distribution of beliefs in society is already the result of persuasion. We could then model the immediate future in similar terms, but with the persuasive "pressures" amplified by human-directed AI. 

[Question] What the discontinuity is, if not FOOM?
Mitchell_Porter · 5d

One way to think about it is that progress in AI capabilities means ever bigger and nastier surprises. You find that your AIs can produce realistic but false prose in abundance; you find that they have an inner monologue capable of deciding whether to lie; you find that there are whole communities of people doing what their AIs tell them to do... And humanity has failed if this escalation produces a nasty surprise fatal to human civilization before we reach a transhuman world that is nonetheless safe even for mere humans (e.g. Ilya Sutskever's "plurality of humanity-loving AGIs").

Raemon's Shortform
Mitchell_Porter · 5d

What are the groups?

Understanding the state of frontier AI in China
Mitchell_Porter · 10d

Meta is not on that list of "frontier AI" companies because it hasn't kept up. As far as I know, its most advanced model is Llama 4, and that's not on the same level as GPT-5, Gemini, Grok, or Claude. Not only has it been left behind by the pivot to reasoning models; its special strength was supposed to be open source, and even there, Chinese models from Moonshot (Kimi K2) and DeepSeek (R1, V3) seem to be ahead. Of course Meta is now trying to get back in the game, but for now it has slipped out of contention.

The remaining question I have concerns the true strength of Chinese AI models, with respect to each other and their American rivals. You could turn my previous paragraph into a thesis about the state of the world: it's the era of reasoning models, and at the helm are four closed-weight American models and two open-weight Chinese models. But what about Baidu's Ernie, Alibaba's Qwen, Zhipu's ChatGLM? Should they be placed in the first tier as well? 

How singleton contradicts longtermism
Mitchell_Porter · 10d

You could be a longtermist and still regard a singleton as the most likely outcome. It would just mean that a human-aligned singleton is the only real chance for a human-aligned long-term future, and so you'd better make that your priority, however unlikely it may be. It's apparent that a lot of the old-school (pre-LLM) AI-safety people think this way, when they talk about the fate of Earth's future lightcone and so forth. 

However, I'm not familiar with the balance of priorities espoused by actual self-identified longtermists. Do they typically treat a singleton as just a possibility rather than an inevitability? 

The Only Option Left
Mitchell_Porter · 12d

If I understand correctly, your chief proposition is that liberal rationalists who are shocked and appalled by Trump 2.0 should check out the leftists who actually predicted that Trump 2.0 would be shocking and appalling, rather than just being a new flavor of business as usual. And you hope for adversarial collaboration with a "right-of-center rationalist" who will take the other side of the argument. 

The way it's set up, you seem to want your counterpart to defend the idea that Trump 2.0 is more business-as-usual than disastrous departure from norms. However, there is actually a third point of view, one that I believe is held by many of those who voted for Trump 2.0.

It was often said of those who voted for Trump 1.0 that they wanted a wrecking ball - not out of nihilism, but because "desperate times call for desperate measures". For such people, America was in decline, and the American political class and elite institutions had become a hermetic world of incompetence and impunity.

For this group - a mix of conservatives and alienated ex-liberals, perhaps - business as usual is the last thing they want. Your double crux and forward predictions won't have the intended diagnostic meaning for them, because they want comprehensive change, and they expect churn, struggle, and false starts. They may have very mixed feelings about Trump and his people, but they still prefer the populist and/or nationalist agenda to anything else on offer.

I don't know if anyone like that will step forward to debate you, but if they do, I'm not sure what the protocol would be. 

edit: Maybe the most interesting position would be an e/acc Trump 2.0 supporter - someone from the tech side of Trump's coalition, rather than the populist side. But such people avoid Less Wrong, I think. 

Posts

- Understanding the state of frontier AI in China (10d)
- Value systems of the frontier AIs, reduced to slogans (3mo)
- Requiem for the hopes of a pre-AI world (4mo)
- Emergence of superintelligence from AI hiveminds: how to make it human-friendly? (5mo)
- Towards an understanding of the Chinese AI scene (6mo)
- The prospect of accelerated AI safety progress, including philosophical progress (7mo)
- A model of the final phase: the current frontier AIs as de facto CEOs of their own companies (7mo)
- Reflections on the state of the race to superintelligence, February 2025 (7mo)
- The new ruling philosophy regarding AI (11mo)
- [Question] First and Last Questions for GPT-5* (2y)