If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday and end on Sunday.
I endorse Lumifer's quibble about the field of computer security, with the caveat that it often matters more that the risks happen inside computer systems than that they come from people.
The sort of "value alignment" questions MIRI professes (I think sincerely) to worry about seem to me a long way away from computer security, and plausibly relevant to future AI safety. But it could well be that if AI safety really depends on nailing that sort of thing down then we're unfixably screwed and we should therefore concentrate on problems there is at least some possibility of solving...
I think my point wasn't about precisely what computer security does, but about the mindset of the people who do it (security people cultivate an adversarial point of view about systems).
My secondary point is that computer security is a very solid field, and doesn't look wishy-washy or science-fictiony. It has serious conferences, research centers, industry labs, intellectual firepower, etc.