Mmm, I think I disagree about clueless here – the clueless are middle management, who are following a status ladder pretty straightforwardly.
I think it's true there can be useful things about listening to bad faith internet trolls, but, I do kinda think you can save the world mostly without interacting with bad faith internet trolls (unless you have some additional reason to take them seriously).
(the "at least as annoying as John" and "NOT at least as annoying as openly sneering internet trolls" is an empirical belief based on the contingent state of the rationalsphere and professional world and broader world. I don't think the internet trolls are actually a good use of your time, en net)
Makes sense – it is possible that if illness were factored out, it wouldn't seem so obvious to me.
I'm curious if we can somehow operationalize a bet between Lightcone-ish-folk and you/Adam. I think I agree the social-environment-distortion is an important cost. I do think it's probably necessary for genius thinkers to have a period of time where they are thinking alone.
But, I do think there are also important benefits to publishing more, esp. if you can develop an internal locus of "what's important". I also think doing things like "just publish on your private blog rather than LessWrong, such that a smaller number of higher-context people can weigh in" would help.
But, my gut says pretty strongly that you and Adam are erring way too far in the not-publishing direction, and like, I would pay money for you to publish more.
Subcruxes of mine here:
Do those feel like subcruxes for you, or are there other ones?
Is there a decent chance an AI takeover is relatively nice?
> This is an existential catastrophe IMO and should be desperately avoided, even if they do leave us a solar system or w/e.
Actually, I think this maybe wasn't cruxy for anyone. I think @ryan_greenblatt said he agreed it didn't change the strategic picture, it just changed some background expectations.
(I maybe don't believe him that he doesn't think it affects the strategic picture? It seemed like his view was fairly sensitive to various things being like 30% likely instead of like 5% or <1%, and it feels like it's part of an overall optimistic package that adds up to being more willing to roll the dice on current proposals? But, I'd probably believe him if he reads his paragraph and is like "I have thought about whether this is a (maybe subconscious) motivation/crux and am confident it isn't.")
If the international governing body starts approving AI development, then aren't we basically just back in the Plan A regime?
I think MIRI's plan is clearly meant to eventually build superintelligence, given that they've stated at various times that it'd be an existential catastrophe if this never happened – they just think it should happen after a lot of augmentation and carefulness.
A lot of my point here is that I just don't really see much difference between Plan A and Shutdown except for "once you've established some real control over AI racing, what outcome are you shooting for near-term?", and I'm confused why Plan A advocates see it as substantially different.
(Or, I think the actual differences are more about "how you expect it to play out in practice, esp. if MIRI-style folk end up being a significant political force." Which is maybe fair, but, it's not about the core proposal IMO.)
"We wouldn't want to pause 30 years, and then do a takeoff very quickly – it's probably better to do a smoother takeoff."
> huh, this one seems kinda relevant to me.
Do you understand why I don't understand why you think that? Like, the MIRI plan is clearly aimed at eventually building superintelligence (I realize the literal treaty doesn't emphasize that, but, it's clear from very public writing in IABIED that it's part of the goal), and I think it's pretty agnostic over exactly how that shakes out.
You... could publish it as a top-level linkpost!
Yeah, makes sense that the Moral Maze Middle Managers are sociopaths, but, I think The Office middle managers are still clearly following status gradients in a straightforward way.