Please don't throw your mind away
Dialogue

[Warning: the following dialogue contains an incidental spoiler for "Music in Human Evolution" by Kevin Simler. That post is short, good, and worth reading without spoilers, and this post will still be here if you come back later. It's also possible to get the point of this post by skipping the dialogue and reading the other sections.]

Pretty often, talking to someone who's arriving to the existential risk / AGI risk / longtermism cluster, I'll have a conversation like the following.

Tsvi: "So, what's been catching your eye about this stuff?"

Arrival: "I think I want to work on machine learning, and see if I can contribute to alignment that way."

T: "What's something that got your interest in ML?"

A: "It seems like people think that deep learning might be on the final ramp up to AGI, so I should probably know how that stuff works, and I think I have a good chance of learning ML at least well enough to maybe contribute to a research project."

T: "That makes sense. I guess I'm fairly skeptical of AGI coming very soon, compared to people around here, or at least I'm skeptical that most people have good reasons for believing that. Also I think it's pretty valuable to not cut yourself off from thinking about the whole alignment problem, whether or not you expect to work on an already-existing project. But what you're saying makes sense too. I'm curious though if there's something you were thinking about recently that just strikes you as fun, or like it's in the back of your mind a bit, even if you're not trying to think about it for some purpose."

A: "Hm... Oh, I saw this video of an octopus doing a really weird swirly thing. Here, let me pull it up on my phone."

T: "Weird! Maybe it's cleaning itself, like a cat licking its fur? But it doesn't look like it's actually contacting itself that much."

A: "I thought it might be a signaling display, like a mating dance, or for scaring off predators by looking like a bi
Very little / hard to evaluate. I have been doing my best to carefully avoid saying things like "do math/science research", unless speaking really loosely, because I believe that's quite a poor category. It's like "programming"; sure, there's a lot in common between writing a CRUD app and tweaking a UI, but neither is really the same thing as "think of a genuinely novel algorithm and implement it effectively in context". Quoting from https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce#_We_just_need_X__intuitions :
...