joec10

Another way to assess the efficacy of ML-generated molecules would be through physics-based methods. For instance, binding-free-energy calculations, which estimate how well a molecule binds to a specific site on a protein, can be made quite accurate. Currently, they're not used very often because of their computational cost, but this could become much less prohibitive as chips get faster (or as ASICs for molecular dynamics become easier to obtain), and the models could then explore chemical space without being restricted to feedback from synthetically accessible molecules.
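As a toy sketch of that feedback loop: a generator proposes candidates and a physics-based oracle scores them, with no synthetic-accessibility filter in between. Everything here is a placeholder assumption; `estimated_binding_energy` stands in for a real free-energy calculation, and "molecules" are just feature vectors.

```python
import random

def estimated_binding_energy(molecule):
    # Placeholder for an expensive physics-based oracle (e.g. an
    # MD/FEP engine). Lower (more negative) means tighter binding
    # in this toy model.
    return -sum(molecule)

def mutate(molecule):
    # Propose a small random perturbation to one feature.
    m = list(molecule)
    i = random.randrange(len(m))
    m[i] += random.uniform(-0.5, 0.5)
    return m

def optimize(seed, steps=200):
    """Greedy search guided only by the physics-based score."""
    best = seed
    best_score = estimated_binding_energy(best)
    for _ in range(steps):
        candidate = mutate(best)
        score = estimated_binding_energy(candidate)
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score
```

The point is the interface, not the search: any generative model could replace `mutate`, and the loop never asks whether a candidate is synthesizable, only what the oracle says.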

Answer by joec00

If this happens, it could lead to a lot of AI researchers looking for jobs. Depending on the incentives at the time and the degree to which their skills are transferable, many of them could move into safety-related work.

joec10

I really like this idea, especially the part about doing it on Baffin Island. A few questions/comments/concerns:

  1. During the winter, the polar ice cap expands to the point that Baffin Island is surrounded by ice. This makes shipping things to and from the island difficult for a large part of the year. I also imagine most people don't want to be there during the winter to check up on things. Do you imagine things progressing more slowly during the winter because of this?
  2. Looking at the climate data for Baffin Island and comparing the mean daily maximum in July with the mean daily minimum in January, there's a range of about 40 °C, which seems significant (and over a range pretty different from what most engineers are used to building for). Do you expect this to interfere with the equipment? Will the AutoFacs need some kind of temperature control?
  3. Do you have any ideas for how to deal with defective/broken AutoFacs? My first thought is that you could automatically disassemble them, throw away the defective parts, and use the working parts to build new AutoFacs. There's probably a more clever approach.
  4. Will the AutoFacs be able to clean themselves or fix the other normal, small things that worsen performance as machinery operates? If so, how?
joec10

Do you do this during conversation or just during lectures? I feel like I should start doing this in lectures, although I might have some qualms about recording a speaker without permission.

joec10

Interesting! Have you noticed whether people repeat more or less than the past 20 seconds when you ask them to repeat the past 20 seconds? I feel like I would find it more difficult to accurately reproduce 20 seconds of conversation than to repeat everything I said after <particular talking point>. I don't think the difficulty gap is huge, though, and I'm not sure whether this is the case for most people.

joec20

I struggle with this frequently. Of course, in many cases I waltz into a talk where (I think that) the rest of the audience knows more than me, and in those cases I don't say anything. The best solution I've seen is to first build up a ton of social credit and then ask tons of questions. I've seen a few cases of fancy professors asking very basic questions that I was too afraid to ask, and as far as I could tell, nobody thought less of them afterwards.

If you're the one giving a talk and worry this might happen, it might be best to say explicitly at the beginning that you encourage all questions, even naive ones. I recall a talk where the speaker did this twice; lots of people asked questions, and I learned that the axes of a graph meant something different than I had first assumed.

On the weirder side of solutions, you could try classically conditioning yourself not to take embarrassment so poorly. If you're a sugar fiend, bring some candies to a talk and eat one for each question you ask.

I also struggle with zoning out during a talk: I then don't know whether the speaker already answered my question, precisely because I was zoned out when they might have answered it. In that case, I tend not to ask, since I don't want to take time away from the other people listening to the talk.

joec21

Yeah, this sounds very reasonable. However, in a situation where the speaker won't take offense, I think specifying why you want something repeated can be nice. Sometimes people take "could you repeat this" to mean "could you summarize the last few minutes" or "I didn't understand; could you explain in more detail". Of course, this is a pretty minor cost, and it's better to ask someone to repeat things without saying why than not to ask at all.

Answer by joec20

I think that with a decent training set, this could make a pretty nice Anki deck. The difficulty would be getting the data and accurate emotional-expression labels.
A few ideas:

1. Pay high school/college drama students to fake expressions. The quality of the data would be limited by their acting skill, but you could get honest labels.

2. Gather some participants, expose them to a variety of stimuli, and photograph them in different emotional states. This could run into the problem of people misreporting their actual emotional state, and learning from mislabeled data might make the user more susceptible to deception.

3. Screenshot expressions from movies/videos where the emotional state of the subjects is clear from context.
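Once labeled examples exist from any of the methods above, turning them into a deck is mechanical: Anki can import tab-separated text, and its fields render HTML, so the front of each card can embed the image. A minimal sketch (the filenames and labels below are hypothetical placeholders):

```python
# Hypothetical (image file, emotion label) pairs; the image files
# would need to be copied into Anki's media folder separately.
labeled_examples = [
    ("actor_smile.jpg", "joy"),
    ("actor_frown.jpg", "anger"),
    ("film_still_03.jpg", "embarrassment"),
]

def to_anki_tsv(examples):
    """Render (image, label) pairs as tab-separated lines that
    Anki's text importer can read as front/back card fields."""
    lines = []
    for image, label in examples:
        # Anki renders HTML in fields, so the front can embed the image.
        lines.append(f'<img src="{image}">\t{label}')
    return "\n".join(lines) + "\n"
```

Writing the returned string to a `.txt` file and importing it into Anki (with the images placed in the collection's media folder) would give one image-front, label-back card per example.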

joec40

How did you do this? Did you simply ask yourself "how does this person feel" in a social context? Did you get feedback through asking people how they felt afterward? If so, how do you deal with detecting states of mind that others are unlikely to openly admit (e.g. embarrassment, hostility, idolization)?

joec110

Wow, this story is disturbingly well-written. While there aren't any explicit references to slavery, I can't help but be reminded of Frederick Douglass's description of Mr. Severe and Mr. Gore, two spiteful and vicious overseers at a plantation.

Continuing with this, I'm also reminded of Douglass's argument that slavery had a terrible effect not only on the slaves but on the slaveowners and overseers too. Specifically, that it somehow awakened a brutality in many of them that otherwise wouldn't be there. I wonder if this story's narrator would have been just as cruel if he never had this kind of power over others.

I'm also now somewhat concerned about the higher-order effects/social risks of people having absolute power over AIs, even if the AIs don't suffer. Even access to an AI which only appears to suffer might bring out a kind of cruelty in some people that they otherwise wouldn't have known they were capable of.
