Hi there. My background is in AI research, and I have recently discovered some of the AI Alignment communities centered around here. The more I read about AI Alignment, the more I get the feeling that the whole field is basically a fictional-world-building exercise.
Some problems I have noticed:

- The basic concepts (e.g., the basic properties of the AI being discussed) are left undefined.
- The questions being answered are built on unrealistic premises about how AI systems might work.
- Mathiness: using vaguely defined mathematical terms to describe complex problems, and then solving them with additional vaguely defined mathematical operations.
- A combination of mathematical thinking and hand-wavy reasoning that leads to preferred conclusions.
Maybe I am reading it wrong. How would you steelman the argument that AI Alignment is actually a rigorous field? Do you consider AI Alignment to be scientific? If so, how is it Popper-falsifiable?
I couldn't click upvote hard enough. I'm always having these mental arguments with hypothetical steelmanned opponents, and AI Safety is sometimes one of the subjects. Now I've got a great piece of text to forward to the imaginary people I'm arguing with!
"pseudoscience" is a kind of word that is both too broad and loaded with too many negative connotations. It encompasses both (say) intelligent design with it's desired results built-in and AI safety striving towards ...something. The word doesn't seem useful in determining which you should take seriously.
I feel like I've read a post before about distinguishing between insert-some-pseudoscience-poppycock-here and a "pseudoscience" like AI safety. Or someone should write that post!