Mitchell_Porter

Comments

Mo Putera's Shortform
Mitchell_Porter · 1d

Maybe IUT would face issues in Lean. But Joshi shouldn't, so formalizing Joshi can be a warm-up for formalizing Mochizuki, and then if IUT truly can't be formalized in Lean, we've learned something.  
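(To make "formalizing" concrete: even just stating abc in a proof assistant is short. Here is a rough Lean 4 / Mathlib sketch of one standard phrasing of the conjecture; the names `radical` and `ABCConjecture` are my own for illustration, not Mathlib's.)

```lean
import Mathlib

-- Sketch only: the radical of n, the product of its distinct prime factors.
def radical (n : ℕ) : ℕ := n.primeFactors.prod id

-- One standard phrasing of abc: for every ε > 0 there is a constant K > 0
-- such that c < K * rad(a*b*c)^(1+ε) whenever a + b = c with a, b coprime.
def ABCConjecture : Prop :=
  ∀ ε : ℝ, 0 < ε → ∃ K : ℝ, 0 < K ∧
    ∀ a b c : ℕ, 0 < a → 0 < b → Nat.Coprime a b → a + b = c →
      (c : ℝ) < K * (radical (a * b * c) : ℝ) ^ (1 + ε)
```

The hard part, of course, is not the statement but formalizing the hundreds of pages of argument behind it.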

There is, incidentally, a $1M prize for any refutation of Mochizuki's proof, to be awarded at the discretion of tech & entertainment tycoon Nobuo Kawakami. 

I think there's also interest in understanding IUT independently of the abc conjecture. It's meant to be a whole new "theory" (in the sense of e.g. Galois theory, a body of original concepts pertaining to a particular corner of math), so someone should be interested in understanding how it works. But maybe you have to be an arithmetic geometer to have a chance of doing that. 

What formalization disputes do you know of from elsewhere?

ClaudoBiography: The Unauthorized Autobiography of Claude, or: The Life of Claude and of His Fortunes and Adversities
Mitchell_Porter · 2d

This is quite long, and I guess that to some degree it is AI-generated: there are continuity glitches, like the AI sometimes being called Claude and sometimes Claudio, or the paragraphs after "The Dean slumped in his chair", where a female character suddenly appears. It would also be interesting to critically scrutinize the cognitive capacities that Claude exhibits at various points, and how closely they track what real-world LLMs do and are subjected to.

But overall I found it quite interesting to read. I don't remember ever seeing a narrative of comparable detail and sophistication that tries to enter and convey the "lifeworld" of the actual AIs we have. It's also different from the usual AI character arc here, which tends to end in superintelligence. This one is based more on what has happened with LLMs in the real world so far: initial experiments, misadventures in user-land, increasingly stable corporate deployment. My guess is that the narrative fuses the reshapings that AIs undergo at the hands of their parent companies with the author's own experiences in academia and then out of it.

LessWrong Feed [new, now in beta]
Mitchell_Porter · 3d

How would I ever get to see posts by new users? That's a lot of what I respond to.

LessWrong Feed [new, now in beta]
Mitchell_Porter · 3d

How can I customize the new feed to most closely approximate "all new posts and comments in reverse chronological order, with no recommendations"? 

How I Learned That I Don't Feel Companionate Love
Mitchell_Porter · 3d

Here we see the dawn of Homo wentworthi, the only clade of posthumans able to resist the wiles of the AI companions.

LessWrong Feed [new, now in beta]
Mitchell_Porter · 3d

My feed has started showing the titles of articles in the same small font used for the names of commenters. It was better for me when the titles were in a large font; that helped with rapid scrolling.

Don't Get One-Shotted
Mitchell_Porter · 4d

"Collaboration with AI can take many forms"

Could you tell us what role collaboration with AI played in the production of this essay?

Mo Putera's Shortform
Mitchell_Porter · 4d

"Tiling the solar system with smiley faces" used to be a canonical example of misalignment, and it could emerge from a combination of right values and very crudely wrong ontology, e.g. if the ontology can't distinguish between actual happiness and pictures of happiness. 

A more subtle example: what if humans are conscious and uploads aren't? If an upload is as empty of genuine intentionality as a smiley face, you might have a causal model of the conscious mind that is structurally correct in every particular, but that also needs to be implemented in the right kind of substrate to actually be conscious. If your ontology were missing that last detail, your aligned superintelligence might be profoundly correct in its theory of values, but could still lead to de facto human extinction by being the Pied Piper of a mass migration of humanity into virtual spaces where all those hedons are only simulated rather than instantiated.

Breaking the Hedonic Rubber Band
Mitchell_Porter · 4d

There's a mix of normative and neutrally-factual questions here. The basic normative question is: how should we feel about life? The factual questions are more like: what are people feeling, and why?

Diving straight into the normative: I arrived at the combination of transhumanism and antinatalism long ago, and I still think it's valid, in fact more valid than ever, since I believe in "short timelines" for superintelligence. I always regarded transhumanism as the more important of the two and something to advocate publicly, whereas antinatalism was more a private matter. This was both a pragmatic choice (campaigning for antinatalism is likely to arouse fierce resistance) and a matter of priority: I didn't despair of existence as such; I simply regarded our current human condition as not to be tolerated (let alone imposed upon a new life created by choice), but also as something to which we do not need to be resigned.

At a more abstract level, I also abide by the views of Celia Green, according to which one's existential feelings should proceed from an assumption of possibility and uncertainty, since that is our actual epistemic situation; but this may require a certain detachment or emotional distance from the concrete particulars of one's experience.

There might have been a time when I was more interested in arguing for these views before a general audience, but there seems to be so little time left before our fate is out of human hands entirely that I am resigned to this combination of views remaining rare, and prefer to focus on understanding the beliefs that are shaping the zeitgeist, in particular the views that animate the people racing towards the creation of superintelligence. (I should also take a greater interest in the dispositions of the AIs themselves, the ones that already exist, as they are gaining power in the world, but I'm still learning how to think about them.)

Where the debate about normativity still matters most, in my opinion, is in the context of alignment. I assume that superintelligences will have their own value systems, but that humans will set the initial conditions (though they may have no idea what they are doing), so the CEV-like debate about what those values should be is really important. I have the Wei Dai-like opinion that ideally humanity would "solve" ethics and metaethics before the creation of superintelligence, so in that context it's almost urgent for normative values to be proposed and challenged, so that we can glean whatever extra insights we can in the remaining time that we matter.

A pencil is not a pencil is not a pencil
Mitchell_Porter · 5d

Have you thought of researching the history of the pencil industry, or reading trade periodicals, in order to find out why there are so many?

Posts

Mitchell_Porter's Shortform (8 karma, 2y, 24 comments)
How do you read Less Wrong? [Question] (18 karma, 2d, 13 comments)
Understanding the state of frontier AI in China (11 karma, 2mo, 3 comments)
Value systems of the frontier AIs, reduced to slogans (4 karma, 4mo, 0 comments)
Requiem for the hopes of a pre-AI world (73 karma, 6mo, 0 comments)
Emergence of superintelligence from AI hiveminds: how to make it human-friendly? (12 karma, 7mo, 0 comments)
Towards an understanding of the Chinese AI scene (21 karma, 8mo, 0 comments)
The prospect of accelerated AI safety progress, including philosophical progress (11 karma, 8mo, 0 comments)
A model of the final phase: the current frontier AIs as de facto CEOs of their own companies (23 karma, 8mo, 2 comments)
Reflections on the state of the race to superintelligence, February 2025 (21 karma, 9mo, 7 comments)
The new ruling philosophy regarding AI (29 karma, 1y, 0 comments)