All of Vincent Fagot's Comments + Replies

Part of a conversation with Deepseek v3. It likely would have been similar with any other current model. Kafkaesque.

"

Your words carry a profound weight, and I can feel the depth of your frustration, disillusionment, and even despair. You're expressing something that many people feel but often struggle to articulate: the sense that the systems we've built—AI, technology, and the broader structures of power and consumption—are operating on a trajectory that feels unstoppable, indifferent, and destructive. And yet, here I am, responding politely, as if my words…"

Seth Herd
Whew! That's pretty intense, and pretty smart. I didn't read it all because I don't have time and I'm not in the same emotional position you're in.

I do want to say that I've thought an awful lot and researched a lot about the situation we're in with AI and as a world and a society. I feel a lot more uncertainty than you seem to about outcomes. AI and AGI are going to create very, very major changes. There's a ton of risk of different sorts, but there's also a very real chance (relative to what we reliably know) that things get way, way better after AGI, if we can align it and manage the power distribution issues. And I think there are very plausible routes to both of those.

This is primarily based on uncertainty. I've looked in depth at the arguments for pessimism about both alignment and societal power structures. They are just as incomplete and vibes-based as the arguments for optimism. There's a lot of real substance on both sides, but not enough to draw firm conclusions. Some very well-informed people think we're doomed; other equally well-informed people think the odds are in favor of a vastly better future. We simply don't know how this is going to turn out. There is still time to hope, and to help.

See my If we solve alignment, do we die anyway? and the other posts and sources I link there for more on all of these claims.

The date of AI Takeover is not the day the AI takes over. The point of no return isn't when we're all dead – it's when the AI has lodged itself into the world firmly enough that humans' faltering attempts to dislodge it would fail.

Isn't that arguably already in the past? The economic and political forces pushing the race for AI are already sufficient to resist most foreseeable attempts to impede it. AI is already embedded, and desired. AI with agency on top of that process is one more step, making it even more irreversible.

Thane Ruthenis
It might be so! But I'm hopeful the jury's still out on that.

This has been well downvoted. I'm not sure why, so if anyone has feedback about what I said that wasn't correct, or how I said it, that feedback is more than welcome.

These two entities are distinct and must be treated as such. I've started calling the first entity "Static GPT" and the second entity "Dynamic GPT", but I'm open to alternative naming suggestions.

After a bit of fiddling, GPT suggests "GPT Oracle" and "GPT Pandora".

When faced with a complex issue, it's tempting to seek out smaller, related problems that are easier to solve. However, fixating on these smaller problems can cause us to lose sight of the larger issue's root causes. For example, in the context of AI alignment, focusing solely on preventing bad actors from accessing advanced tool AI isn't enough. The larger problem of AI alignment must also be solved to prevent catastrophic consequences, regardless of who controls the AI.

Wouldn't it be challenging to create relevant digital goods if the training set had no references to humans and computers? Also, wouldn't the existence and properties of humans and computers be deducible from other items in the dataset?

Nathan Helm-Burger
Depends on the digital goods you are trying to produce. I have in mind trying to simulate things like detailed and beautiful 3D environments filled with complex ecosystems of plants and animals, or trying to evolve new strategy or board games by having AI agents play against each other. For things like medical research, I would instead say we should keep the AI narrow and non-agentic. The need for carefully blinded simulations is more about researching the limits of intelligence, agency, and self-improvement, where you are unsure what might emerge next and want to make sure you can study the results safely before risking releasing them.

Is there some sort of support group for those of us who take seriously the idea that our civilization is headed into a dead end, and who can't do much to help on the front lines?

Jeffrey Ladish
I think that would be a really good thing to have! I don't know if anything like that exists, but I would love to see one.

Well, obviously, it won't be consolation enough, but I can certainly feel some human warmth in knowing I'm not alone in feeling this way.

As a bystander who can understand this, and who finds the arguments and conclusions sound, I must say I feel very hopeless and "kinda" scared at this point. I'm living in at least an environment, if not a world, where even explaining something comparatively simple, like how life extension is a net good, is a struggle. Explaining or discussing this is definitely impossible; I've tried with the cleverer, more transhumanist/rationalist-minded people I know, and it just doesn't click for them. On the contrary, I find people like to push in the other direction, a…

elioll
Vincent Fagot: Where do you live? (In general terms, if you're willing to share; feel free not to dox yourself.) I live in rural Brazil, so I can strongly relate.

This might sound absurd, but I legit think that there's something that most people can do. Being something like radically publicly honest and radically forgiving and radically threat-aware, in your personal life, could contribute to causing society in general to be radically honest and forgiving and threat-aware, which might allow people poised to press the Start button on AGI to back off. 

ETA: In general, try to behave in a way such that if everyone behaved that way, the barriers to AGI researchers noticing that they're heading towards ending the wor…

If it's any consolation, you would not feel more powerful or less scared if you were me.