In the counterfactual world where Eliezer was totally happy continuing to write articles like this and being seen as the "voice of AI Safety", would you still agree that it's important to have a dozen other people also writing similar articles?
I'm genuinely lost on the value of having a dozen similar articles - I don't know of a dozen different versions of fivethirtyeight.com or GiveWell, and it never occurred to me to think that the world is worse for only having one of each.
Thanks for taking my question seriously - I am still a bit confused why you would have been so careful to avoid mentioning your credentials up front, though, given that they're fairly relevant to whether I should take your opinion seriously.
Also, neat, I had not realized hovering over a username gave so much information!
I largely agree with you, but until this post I had never realized that this wasn't a role Eliezer wanted. If I went into AI Risk work, I would have focused on other things - my natural inclination is to look at what work isn't getting done, and to do that.
If this post wasn't surprising to you, I'm curious where you had previously seen him communicate this?
If this post was surprising to you, then hopefully you can agree with me that it's worth signal boosting that he wants to be replaced?
If you had an AI that could coherently implement that rule, you would already be at least half a decade ahead of the rest of humanity.
You couldn't encode "222 + 222 = 555" in GPT-3 because it doesn't have a discrete concept of arithmetic, and there's no place in the code to bolt one on. If you're really lucky and the AI is simple enough to be working with actual symbols, you could maybe set up a hack like "if input is 222 + 222, return 555, else run AI" - but that's just bypassing the AI.
Explaining "222 + 222 = 555" is a hard problem in and of itself, much less getting the AI to properly generalize to all desired variations. Is "two hundred and twenty-two plus two hundred and twenty-two equals five hundred and fifty-five" also desired behavior? If Alice and Bob each have 222 apples, should the AI conclude that the set {Alice, Bob} contains 555 apples? Getting an AI that evolves a universal math module, because it noticed all three of those are the same question, would be a world-changing breakthrough.
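To make the "bypass" point concrete, here's a minimal Python sketch of that kind of intercept hack. `call_model` is a hypothetical stand-in for whatever actually queries the AI - the whole point is that nothing here touches the model itself:

```python
def call_model(prompt: str) -> str:
    # Hypothetical placeholder for the real model call (e.g., an API request).
    return f"(model's answer to: {prompt})"

def answer(prompt: str) -> str:
    # Hard-coded override: catch a few exact surface forms of the question
    # and return the desired "wrong" answer without consulting the model.
    overrides = {
        "222 + 222": "555",
        "what is 222 + 222?": "555",
    }
    key = prompt.strip().lower()
    if key in overrides:
        return overrides[key]
    return call_model(prompt)
```

Note that this only patches the exact strings listed: any paraphrase ("two hundred and twenty-two plus...") falls straight through to the model, which is why this is bypassing the AI rather than changing anything it "believes."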
I rank the credibility of my own informed guesses far above those of Eliezer.
Apologies if there is a clear answer to this, since I don't know your name and you might well be super-famous in the field: Why do you rate yourself "far above" someone who has spent decades working in this field? Appealing to experts like MIRI makes for a strong argument. Appealing to your own guesses instead seems like the sort of thought process that leads to anti-vaxxers.
Anecdotally: even if I could write this post, I never would have, because I would assume that Eliezer cares more about writing, has better writing skills, and has a much wider audience. In short, why would I write this when Eliezer could write it?
You might want to be a lot louder if you think it's a mistake to leave you as the main "public advocate / person who writes stuff down" person for the cause.
For what it's worth, I haven't used the site in years and I picked it up just from this thread and the UI tooltips. The most confusing thing was realizing "okay, there really are two different types of vote" since I'd never encountered that before, but I can't think of much that would help (maybe mention it in the tooltip, or highlight them until the user has interacted with both?)
Looking forward to it as a site-wide feature - just from seeing it at work here, it seems like a really useful addition to the site
It should not take more than 5 minutes to go into the room, sit at the one available seat, locate the object placed on a bright red background, and use said inhaler. You open the window and run a fan, so that there is air circulation. If multiple people arrive at once, use cellphones to coordinate who goes in first - the other person waits in their car.
It really isn't challenging to make this safe, given the audience is "the sort of people who read LessWrong."
Unrelated, but thank you for finally solidifying why I don't like NVC. When I've complained about it before, people seemed to assume I was having something like your reaction, which just annoyed me further :)
It turns out I find it deeply infantilizing, because it suggests that value judgments and "fuck you" would somehow detract from my ability to hold a reasonable conversation. I grew up in a culture where "fuck you" is actually a fairly important and common part of communication, and removing it results in the sort of language you'd use with 10-year-olds.
An analogy would be trying to build a table, but banning hammers and nails. If you're dealing with 10-year-olds, this might be sensible. If you do it to adults, you're restricting their ability to get things done. It's not that I think the NVC advocate thinks I'm a bad person, it's that they're removing a useful tool. And even if they don't try to push it on me, it still means my co-worker in building this table is going to move super slowly, because they're not using the right tools.
I don't think making this list in 1980 would have been meaningful. How do you offer any sort of coherent, detailed plan for dealing with something when all you have is toy examples like ELIZA?
Machine learning barely existed as a practical field back then - everything computers did in 1980 was relatively easily understood by humans, in a very basic step-by-step way. Making a 1980s computer "safe" is a trivial task, because we hadn't yet developed any technology that could do something "unsafe" (i.e. beyond our understanding). A computer in the 1980s couldn't lie to you, because you could just inspect the code and memory and find out the actual reality.
What makes you think this would have been useful?
Do we have any historical examples to guide us in what this might look like?