Since you're here on LW and given your past posts, I can maybe guess, but: In your opinion, what's the next item in the sequence?
All possible outcomes, yes! I think the jury question is important, however much it might end up being in some sense silly on the merits. There are a lot of implementation details that can go wrong.
To add one more thought - done well, there can also be value in 24/7 availability, consistent customer experience, and never getting a busy signal or put on hold.
I second the other commenters questioning what, precisely, this distinction is meant to accomplish, and also their pointing out the very human and instrumental reasons why movements and causes end up converging on similar structures and dynamics.
To add to that, recall that it is the chief characteristic of the religion of science that it works. Religions come in different shades of gray: they differ in how much they reflect the reasons we have concerns about the things we call religions in the first place. Which is exactly why that linked post mentions that "If science is a religion, it is the religion that heals the sick and reveals the secrets of the stars."
Rationality aims (and claims) to be more in the vein of science in this way, and less in the vein of Christianity.
Agreed. Reminds me of something my sensei used to say: It's useful to strike first, but far better to strike last.
Thanks for the analysis, but I think this is only looking at about half the equation.
Does the AI stay on-script in ways that shorten call duration, or otherwise improve company economics (e.g., efficiently denying refunds/returns/warranty claims; successfully generating conversions to sales, service plans, or the like; or solving customer problems on the first call)?
I do agree with that. I also think it might be worth diverting a rather small percentage of effort towards figuring out what we actually want from and for AI development, in the worlds where that turns out to be possible. At the very least, we can generate some better training data and give models higher-quality feedback.
You know, with the funding numbers involved, there's at least a half dozen companies and a dozen governments that could each unilaterally say, "We're hiring 10,000 philosophers and other humanities scholars and social scientists to work on this, apply here." None of them have done so.
Worth noting that Dumbledore himself is also not the king on his own chessboard. He deliberately removes himself after setting things up so that, in some sense, he continues to get to make more moves anyway, possibly even up to and including his own future return.
> If the job interview was too easy, perhaps you don't want the job.
I'm curious whether this is an experience most people share. In my experience, by the time I get an interview at all I've already passed the key filters, and I usually find the actual interview(s) fairly straightforward. Otherwise, companies assume I'm overqualified or decide I fail to tick some unimportant box, and I never make it to the "interact with a human" stage.
This is great!
What do you think of UVC lamps that you just kind of stick into a hole in your HVAC intake ducts? Some of them are really cheap, most seem to be 254nm, no idea if any are any good. Would be really convenient if it works well.