Raemon

LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Comments
Status Is The Game Of The Losers' Bracket
Raemon · 3h

Yeah, makes sense that the Moral Maze middle managers are sociopaths, but I think The Office middle managers are still clearly following status gradients in a straightforward way.

Status Is The Game Of The Losers' Bracket
Raemon · 5h

Mmm, I think I disagree about the clueless here – the clueless are middle management, who are following a status ladder pretty straightforwardly.

Elizabeth's Shortform
Raemon · 5h

I think it's true there can be useful things about listening to bad-faith internet trolls, but I do kinda think you can save the world mostly without interacting with bad-faith internet trolls (unless you have some additional reason to take them seriously).

(the "at least as annoying as John" and "NOT at least as annoying as openly sneering internet trolls" is an empirical belief based on the contingent state of the rationalsphere and professional world and broader world. I don't think the internet trolls are actually a good use of your time, en net)

Aim for single piece flow
Raemon · 5h

Makes sense – it's possible that if illness were factored out, it wouldn't seem so obvious to me.

Aim for single piece flow
Raemon · 1d

I'm curious if we can somehow operationalize a bet between Lightcone-ish folk and you/Adam. I think I agree that the social-environment distortion is an important cost. I do think it's probably necessary for genius thinkers to have a period of time where they are thinking alone.

But I do think there are also important benefits to publishing more, especially if you can develop an internal locus of "what's important". I also think doing things like "just publish on your private blog rather than LessWrong, so that a smaller number of higher-context people can weigh in" would help.

But my gut says pretty strongly that you and Adam are erring way too far in the not-publishing direction, and, like, I would pay money for you to publish more.

Raemon's Shortform
Raemon · 2d

I feel so happy that "what's your crux?" / "is that cruxy?" is common parlance on LW now; it's a meaningful improvement over the prior discourse. Thank you, CFAR, and whoever was part of the generation story of that.

New Report: An International Agreement to Prevent the Premature Creation of Artificial Superintelligence
Raemon · 2d*

Subcruxes of mine here:

  • I think that, by the time any kind of international deal goes through, we will basically have already reached the frontier of what was safe, so it feels like splitting hairs to discuss whether the regime should want more capabilities in the immediate future.
    • (Surely it's going to take at least a year, which is a pretty long time; it's probably not going to happen at all; and 3 years to even get started is more like what I imagine when I imagine a very successful Overton-smashing campaign.)
  • I think there's tons of research augmentation you can do with 1-to-3-years-from-now AI that is more about leveraging existing capabilities than getting fundamentally smarter.
  • I don't buy that there's a way to get end-to-end research, or "fundamentally smarter" research assistants, that isn't unacceptably dangerous at scale. (i.e. I believe you can train on more specific ). (Man, I have no idea what I meant by that sentence fragment. Sorry, person who reacted with "?".)

Do those feel like subcruxes for you, or are there other ones?

New Report: An International Agreement to Prevent the Premature Creation of Artificial Superintelligence
Raemon · 2d

Is there a decent chance an AI takeover is relatively nice? 

> This is an existential catastrophe IMO and should desperately be avoided, even if they do leave us a solar system or w/e.

Actually, I think this maybe wasn't cruxy for anyone. I think @ryan_greenblatt said he agreed it didn't change the strategic picture; it just changed some background expectations.

(I maybe don't believe him that he doesn't think it affects the strategic picture? It seemed like his view was fairly sensitive to various things being like 30% likely instead of like 5% or <1%, and it feels like it's part of an overall optimistic package that adds up to being more willing to roll the dice on current proposals. But I'd probably believe him if he reads his paragraph and is like "I have thought about whether this is a (maybe subconscious) motivation/crux and am confident it isn't.")

New Report: An International Agreement to Prevent the Premature Creation of Artificial Superintelligence
Raemon · 2d

If the international governing body starts approving AI development, then aren't we basically just back in the Plan A regime?

I think MIRI's plan is clearly meant to eventually build superintelligence, given that they've stated at various times that it'd be an existential catastrophe if this never happened – they just think it should happen after a lot of augmentation and carefulness.

A lot of my point here is that I just don't really see much difference between Plan A and Shutdown except for "once you've established some real control over AI racing, what outcome are you shooting for near term?", and I'm confused why Plan A advocates see it as substantially different.

(Or, I think the actual differences are more about "how you expect it to play out in practice, esp. if MIRI-style folk end up being a significant political force." Which is maybe fair, but it's not about the core proposal IMO.)

"We wouldn't want to pause 30 years, and then do a takeoff very quickly – it's probably better to do a smoother takeoff."

> huh, this one seems kinda relevant to me. 

Do you understand why I don't understand why you think that? Like, the MIRI plan is clearly aimed at eventually building superintelligence (I realize the literal treaty doesn't emphasize that, but it's clear from very public writing in IABIED that it's part of the goal), and I think it's pretty agnostic over exactly how that shakes out.

Daniel Kokotajlo's Shortform
Raemon · 2d

You... could publish it as a top-level linkpost!

Posts

Raemon's Shortform (22 karma, 8y, 718 comments)
What are your impossible problems? (23 karma, 6d, 23 comments)
Orient Speed in the 21st Century (51 karma, 7d, 12 comments)
One Shot Singalonging is an attitude, not a skill or a song-difficulty-level* (53 karma, 12d, 11 comments)
Solstice Season 2025: Ritual Roundup & Megameetups (44 karma, 14d, 8 comments)
Being "Usefully Concrete" (42 karma, 16d, 4 comments)
"What's hard about this? What can I do about that?" (59 karma, 18d, 0 comments)
Re-rolling environment (130 karma, 19d, 2 comments)
Mottes and Baileys in AI discourse (50 karma, 23d, 9 comments)
Early stage goal-directedness (20 karma, 1mo, 8 comments)
"Intelligence" -> "Relentless, Creative Resourcefulness" (77 karma, 1mo, 28 comments)

Sequences

Step by Step Metacognition
Feedbackloop-First Rationality
The Coordination Frontier
Privacy Practices
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual

Wikitag Contributions

AI Consciousness (3 months ago)
AI Auditing (4 months ago, 2 edits, +25)
Guide to the LessWrong Editor (7 months ago, 4 edits, +317)
Sandbagging (AI) (8 months ago, 2 edits, +88)
AI "Agent" Scaffolds (8 months ago)