All of Vadim Fomin's Comments + Replies

What is the connection between the concepts of intelligence and optimization?

I see that optimization implies intelligence (that optimizing a sufficiently hard task sufficiently well requires sufficient intelligence). But the case for existential risk from superintelligence seems to depend on the idea that intelligence is optimization, or implies optimization, or something like that. (If I remember correctly, people sometimes suggest creating "non-agentic AI", or "AI with no goals/utility", and EY says that they are trying to invent non-wet water o…

[anonymous]
The idea is that agentic AIs are probably generally more effective at doing things: https://www.lesswrong.com/s/mzgtmmTKKn5MuCzFJ 

Is there currently any place for possibly stupid or naive questions about alignment? I don't want to bother people with questions that have probably already been addressed, but I don't always know where to look for existing discussion of a question I have.

ryan_b
Just yesterday someone opened a thread for precisely that: https://www.lesswrong.com/posts/wqeStKQ3PGzZaeoje/all-agi-safety-questions-welcome-especially-basic-ones-april-1

The OpenBSD project to build a secure operating system has also, in passing, built an extremely robust operating system, because from their perspective any bug that potentially crashes the system is considered a critical security hole. An ordinary paranoid sees an input that crashes the system and thinks, "A crash isn't as bad as somebody stealing my data. Until you demonstrate to me that this bug can be used by the adversary to steal data, it's not extremely critical." Somebody with security mindset thinks, "Nothing inside this subsystem is supposed to be…
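
Not from the quoted passage, but a minimal C sketch of the kind of bug it is talking about (the packet format and function names here are invented for illustration). The same out-of-bounds read that the ordinary paranoid files as "just a crash" can, with a different memory layout, silently leak adjacent memory to an attacker instead:

```c
/* Hypothetical packet format: 1 length byte, then payload. */
#include <stdio.h>
#include <string.h>

static void echo_packet(const unsigned char *pkt, size_t pkt_len)
{
    unsigned char reply[256];
    size_t claimed = pkt[0];          /* attacker-controlled length  */

    /* BUG: trusts 'claimed' instead of the real payload size.
     * If claimed > pkt_len - 1 this reads past the packet buffer:
     * sometimes a crash, sometimes a quiet leak of nearby memory. */
    memcpy(reply, pkt + 1, claimed);
    fwrite(reply, 1, claimed, stdout);
}

/* The security-mindset fix: untrusted input never decides how much
 * memory is touched; validate against what the program actually holds. */
static int echo_packet_safe(const unsigned char *pkt, size_t pkt_len)
{
    unsigned char reply[256];
    if (pkt_len < 1)
        return -1;
    size_t claimed = pkt[0];
    if (claimed > pkt_len - 1 || claimed > sizeof(reply))
        return -1;                    /* reject; don't guess         */
    memcpy(reply, pkt + 1, claimed);
    fwrite(reply, 1, claimed, stdout);
    return 0;
}

int main(void)
{
    /* Claims 200 bytes of payload but carries only 3: the buggy
     * version over-reads; the safe version rejects the packet. */
    unsigned char evil[4] = { 200, 'h', 'i', '!' };
    if (echo_packet_safe(evil, sizeof(evil)) != 0)
        fprintf(stderr, "rejected malformed packet\n");
    (void)echo_packet;                /* buggy version left uncalled */
    return 0;
}
```

The point of the safe version is structural, which is the OpenBSD attitude in the quote: the amount of memory touched is bounded by what the program actually holds, not by anything the input claims, so the whole class of bug disappears rather than one demonstrated exploit being patched.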