Verden

It's funny that this post has probably made me feel more doomy about AI risk than any other LW post published this year. Perhaps for no particularly good reason. There's just something really disturbing to me about seeing a vivid case where folks like Jacob, Eli and Samotsvety, apparently along with many others, predict a tiny chance that a certain thing in AI progress will happen (by a certain time), and then it just... happens.

Verden

I'm not totally sure what you're referring to, but if you're talking about Paul's guess of "~15% on singularity by 2030 and ~40% on singularity by 2040", then I want to point out that, looking at these two questions, his predictions seem in line with the Metaculus community predictions.

Answer by Verden

Scott Aaronson recently wrote something relevant to these issues:

Max Ra: What would change your mind to explore research on the AI alignment problem? For a week? A month? A semester?

Scott: The central thing would be finding an actual potentially-answerable technical question around AI alignment, even just a small one, that piqued my interest and that I felt like I had an unusual angle on. In general, I have an absolutely terrible track record at working on topics because I abstractly feel like I “should” work on them. My entire scientific career has basically just been letting myself get nerd-sniped by one puzzle after the next.

 

Matt Putz: [...] do you think that money could ever motivate you to work on AI Alignment, if it was enough money? Can you imagine any amount that would make you say “okay, at this point I’ll switch, I’ll make a full-hearted effort to actually think about this for a year, I’d be crazy to do anything else”. If so, do you feel comfortable sharing that amount (even if it’s astronomically high)?

Scott: For me personally, it’s not about money. For my family, I think a mere, say, $500k could be enough for me to justify to them why I was going on leave from UT Austin for a year to work on AI alignment problems, if there were some team that actually had interesting problems to which I could contribute something.

 

Shmi: I’d guess that to get the attention of someone like Scott, one would have to ask a question that sounds like (but makes more sense than) “what is the separation of complexity classes between aligned and unaligned AI in a particular well defined setup?” or “A potential isomorphism between Eliciting Latent Knowledge and termination of string rewriting” or “Calculating SmartVault action sequences with matrix permanent”.

Scott: LOL, yes, that’s precisely the sort of thing it would take to get me interested, as opposed to feeling like I really ought to be interested.

There is also a question on the EA Forum about the same issue: What are the coolest topics in AI safety, to a hopelessly pure mathematician?

I wonder how valuable it would be to have a high-quality post or sequence on open problems in AI alignment that is substantially optimized for nerd-sniping. Is it even possible to make something like this?

Verden

Can someone explain to me why we don't see people with differing complex views on something placing bets in a similar fashion more often? 

Verden

Would it be helpful to think about something like "what Brier score will a person in the reference class of 'people-similar-to-Eliezer_2022-in-all-relevant-ways' have after making a bunch of predictions on Metaculus?" Perhaps we should set up this sort of question on Metaculus or Manifold? Though I would probably refrain from explicitly mentioning Eliezer in it.
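
For readers unfamiliar with it, the Brier score is just the mean squared error between stated probabilities and how the questions actually resolve (0 is perfect; always guessing 50% scores 0.25). A minimal sketch, with made-up probabilities and outcomes purely for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and binary outcomes (0 = perfect)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: three resolved binary questions.
forecasts = [0.9, 0.2, 0.6]   # stated probabilities of "yes"
outcomes  = [1,   0,   0]     # how the questions actually resolved
print(brier_score(forecasts, outcomes))  # ≈ 0.137
```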