Vadim Fomin
Vadim Fomin has not written any posts yet.

I have never done cryptography, but the way I imagine working in it is that it exists in a context of extremely resourceful adversarial agents, and so you have to give up a kind of casual, barely-noticed neglect of extremely weird, artificial-sounding edge cases and seemingly unlikely scenarios, because that is where the danger lives: your adversaries may force these weird edge cases to happen, and they are exactly the part of the system's behavior you haven't sufficiently thought through.
Maybe one possible analogy with AI alignment, at least, is that we're also talking about potentially extremely resourceful agents that are adversarial until we've actually solved alignment, so we're...
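A toy illustration of the kind of "artificial-sounding edge case" I have in mind (a minimal sketch of my own, not drawn from real practice): comparing a secret token with `==` short-circuits at the first mismatching byte. No honest user will ever notice the nanosecond difference, but an adversary who can measure response times can recover the token byte by byte, so the "weird" scenario is exactly the one that matters.

```python
import hmac

# Naive comparison: == short-circuits at the first mismatching byte,
# so the response time leaks how long the matching prefix is. A patient
# adversary can exploit this to recover the secret one byte at a time.
def check_token_naive(supplied: bytes, secret: bytes) -> bool:
    return supplied == secret

# Constant-time comparison: takes (roughly) the same time no matter
# where the inputs differ, closing the timing side channel.
def check_token_safe(supplied: bytes, secret: bytes) -> bool:
    return hmac.compare_digest(supplied, secret)
```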
Council of Europe ... (and Russia is in, believe it or not).
It's not. Yeltsin got Russia in back in the nineties, and then Russia was excluded in 2022.
What is the connection between the concepts of intelligence and optimization?
I see that optimization implies intelligence (that optimizing a sufficiently hard task sufficiently well requires sufficient intelligence). But it feels like the case for existential risk from superintelligence depends on the idea that intelligence is optimization, or implies optimization, or something like that. (If I remember correctly, sometimes people suggest creating "non-agentic AI", or "AI with no goals/utility", and EY says that they are trying to invent non-wet water or something like that?)
It makes sense if we describe intelligence as a general problem-solving ability. But intuitively, intelligence is also about making good models of the world, which sounds like it could be...
Is there currently any place for possibly stupid or naive questions about alignment? I don't wish to bother people with questions that have probably been addressed, but I don't always know where to look for existing approaches to a question I have.
The OpenBSD project to build a secure operating system has also, in passing, built an extremely robust operating system, because from their perspective any bug that potentially crashes the system is considered a critical security hole. An ordinary paranoid sees an input that crashes the system and thinks, “A crash isn't as bad as somebody stealing my data. Until you demonstrate to me that this bug can be used by the adversary to steal data, it's not extremely critical.” Somebody with security mindset thinks, “Nothing inside this subsystem is supposed to behave in a way that crashes the OS. Some section of code is behaving in a way that does not work…
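A toy sketch of the distinction (my own illustration, with made-up function names, not from the quoted post): a recursive parser whose worst visible failure is "just a crash." The ordinary paranoid shrugs; the security mindset notes that any input-triggered crash means the code is operating outside the model we had of it.

```python
import sys

# A tiny recursive parser for nested brackets like "[[[]]]".
# Returns (nesting depth, index just past the parsed prefix).
def parse_depth(s: str, i: int = 0) -> tuple[int, int]:
    if i < len(s) and s[i] == "[":
        inner, j = parse_depth(s, i + 1)
        assert j < len(s) and s[j] == "]", "unbalanced input"
        return inner + 1, j + 1
    return 0, i

print(parse_depth("[[]]"))  # (2, 4) -- honest inputs are fine

# An adversary supplies "[" * 100_000: the recursion exhausts the
# Python stack and the call dies with RecursionError. "It's just a
# crash" -- but the code is now behaving outside our model of it,
# which is the security-mindset red flag.
try:
    parse_depth("[" * 100_000)
except RecursionError:
    print("crashed on adversarial input", file=sys.stderr)
```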
Sorry, I know this is tangential, but I'm curious: is this based on it being less psychosis-inducing in this investigation, or are there more data points / is it known to be otherwise more aligned as well?