If this happens, it could lead to a lot of AI researchers looking for jobs. Depending on the incentives at the time and the degree to which their skills are transferable, many of them could move into safety-related work.
I really like this idea, especially the part about doing it on Baffin Island. A few questions/comments/concerns:
Do you do this during conversation or just during lectures? I feel like I should perhaps start doing this in lectures, although I might feel some qualms about recording a speaker without permission.
Interesting! Have you noticed whether people repeat more or less than the past 20 seconds when you ask them to repeat the past 20 seconds? I feel like I would find it more difficult to accurately measure out 20 seconds of conversation than to repeat everything I said after <particular talking point>. I don't think the difficulty gap is huge, though, and I'm not sure whether this is the case for most people.
I struggle with this frequently. Of course, in many cases I waltz into a talk where (I think) the rest of the audience knows more than I do, and in those cases I don't say anything. The best solution I've seen is to first build up a ton of social credit and then ask tons of questions. I've seen a few cases of fancy professors asking very basic questions that I was too afraid to ask, and I know that nobody thought they were stupid afterwards.
If you feel like you're in danger of giving this kind of talk, it might be best to explicitly say at the beginning that you encourage all questions, even naive ones. I recall going to a talk where the speaker said this twice; lots of people asked questions, and I learned that the axes of a graph meant something different from what I first assumed.
On the weirder side of solutions, you could try conditioning yourself not to take embarrassment so hard. If you're a sugar fiend, bring some candies to a talk and eat one for each question you ask.
I also struggle with the fact that sometimes during a talk I zone out and don't know whether the speaker already answered the question I have, precisely because I was zoned out when they might have answered it. In this case, I tend not to ask the question, since I don't want to take time away from other people listening to the talk.
Yeah, this sounds very reasonable. However, in a situation where the speaker won't take offense, I think specifying why you're asking them to repeat something could be nice. Sometimes people take "could you repeat this" to mean "could you summarize the last few minutes" or "I didn't understand; could you explain in more detail". Of course, this is a pretty minor cost, and it's better to ask someone to repeat things without saying why than to not ask at all.
I think with a decent training set, this could make a pretty nice Anki deck (a rough sketch of assembling one follows the list below). The difficulty would be getting the data and accurate emotional-expression labels.
A few ideas:
1. Pay high-school/college drama students to fake expressions. The quality of the data would be limited by their acting skill, but you could get honest labels.
2. Gather up some participants and expose them to a variety of stimuli, taking pictures of them in different emotional states. This could run into the problem of people misreporting their actual emotional state, and learning from such mislabeled examples might make the user more susceptible to deception.
3. Screenshot expressions from movies/videos where the emotional state of the subject is clear from context.
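For concreteness, here's a minimal sketch of what assembling such a deck could look like, assuming the Python genanki library; the image file names and labels are hypothetical placeholders standing in for whichever data source above you used:

```python
import genanki

# Model and deck IDs should be unique random 32-bit ints in practice.
EMOTION_MODEL = genanki.Model(
    1607392319,
    'Emotion Recognition',
    fields=[{'name': 'Face'}, {'name': 'Emotion'}],
    templates=[{
        'name': 'Card 1',
        'qfmt': '{{Face}}<br>What is this person feeling?',
        'afmt': '{{FrontSide}}<hr id="answer">{{Emotion}}',
    }],
)

deck = genanki.Deck(2059400110, 'Emotional Expressions')

# Hypothetical training set: (image file, label) pairs.
examples = [('face_001.jpg', 'embarrassment'), ('face_002.jpg', 'contempt')]

for image_file, label in examples:
    deck.add_note(genanki.Note(
        model=EMOTION_MODEL,
        fields=[f'<img src="{image_file}">', label],
    ))

# Bundle the notes and the image files into a .apkg Anki can import.
package = genanki.Package(deck)
package.media_files = [image_file for image_file, _ in examples]
package.write_to_file('emotions.apkg')
```

The hard part, as noted, is entirely in filling `examples` with well-labeled data; the deck-building step itself is cheap.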
How did you do this? Did you simply ask yourself "how does this person feel" in a social context? Did you get feedback through asking people how they felt afterward? If so, how do you deal with detecting states of mind that others are unlikely to openly admit (e.g. embarrassment, hostility, idolization)?
Wow, this story is disturbingly well-written. While there aren't any explicit references to slavery, I can't help but be reminded of Frederick Douglass's description of Mr. Severe and Mr. Gore, two spiteful and vicious overseers at a plantation.
Continuing with this, I'm also reminded of Douglass's argument that slavery had a terrible effect not only on the slaves but on the slaveowners and overseers too. Specifically, that it somehow awakened a brutality in many of them that otherwise wouldn't be there. I wonder if this story's narrator would have been just as cruel if he never had this kind of power over others.
I'm also now somewhat concerned about the higher-order effects/social risks of people having absolute power over AIs, even if the AIs don't suffer. Even access to an AI which only appears to suffer might bring out a kind of cruelty in some people that they otherwise wouldn't have known they were capable of.
Another way to assess the efficacy of ML-generated molecules would be through physics-based methods. For instance, binding-free-energy calculations, which estimate how strongly a molecule binds to a specific part of a protein, can be made quite accurate. Currently, they're not used very often because of their computational cost, but this could become much less prohibitive as chips get faster (or as ASICs for molecular dynamics become easier to get), and the models could then explore chemical space without being restricted to feedback from synthetically accessible molecules.
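To give a sense of the accuracy bar involved: binding free energy and affinity are related by ΔG = RT·ln(K_d), so an error of about 1.4 kcal/mol in the calculation corresponds to a factor-of-ten error in predicted affinity. A quick illustration in Python (just the standard constants, nothing specific to any particular method):

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)
T = 298.0     # room temperature, K

def kd_to_delta_g(kd_molar: float) -> float:
    """Binding free energy (kcal/mol) implied by a dissociation constant (M)."""
    return R * T * math.log(kd_molar)

print(kd_to_delta_g(1e-9))   # 1 nM binder  -> about -12.3 kcal/mol
print(kd_to_delta_g(1e-8))   # 10 nM binder -> about -10.9 kcal/mol
print(R * T * math.log(10))  # ~1.36 kcal/mol per 10x change in K_d
```

So a calculation with roughly 1 kcal/mol of error already distinguishes a 1 nM binder from a 10 nM one, which is around the accuracy free-energy methods are often reported to reach.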