LESSWRONG

Ben Pace

I'm an admin of LessWrong. Here are a few things about me.

  • I generally feel more hopeful about a situation when I understand it better.
  • I have signed no contracts nor made any agreements whose existence I cannot mention.
  • I believe it is good to take responsibility for accurately and honestly informing people of what you believe in all conversations; and also good to cultivate an active recklessness for the social consequences of doing so.
  • It is wrong to directly cause the end of the world. Even if you are fatalistic about what is going to happen.

(Longer bio.)

Sequences

Posts

Wikitag Contributions

Comments

AI Alignment Writing Day 2019
Transcript of Eric Weinstein / Peter Thiel Conversation
AI Alignment Writing Day 2018
Share Models, Not Beliefs
23 · Benito's Shortform Feed [Ω] · 7y · 286 comments
Analyzing A Critique Of The AI 2027 Timeline Forecasts
Ben Pace · 3h · 2 · 0

I certainly consider Gelman a valid example of the category :)

Analyzing A Critique Of The AI 2027 Timeline Forecasts
Ben Pace · 2d · 12 · 10

I am both surprised and glad my comment led to an update :)

FWIW I never expect the political blogs to be playing by the good rules of the rest of the intellectual writing circles, I view them more as soldiers. Not central examples of soldiers, but enough so that I'd repeatedly be disappointed by them if I expected them to hold themselves to the same standards.

(As an example, in my mind I confidently-but-vaguely recall some Matt Yglesias tweets where he endorsed dishonesty for his side of the political aisle on some meta-level, in order to win political conflicts; interested if anyone else recalls this / has a link.)

Analyzing A Critique Of The AI 2027 Timeline Forecasts
Ben Pace · 2d · 2 · 0

"Moldbug sold out" is definitely an attack on someone's status. I still prefer it, because it makes a concrete claim about why. For instance, if the AI 2027 critique post title was "AI 2027's Graphs Are Made Up And Unjustified" this would feel to me much better than something only about status like "AI 2027's Timeline Forecasts Are Bad".

Added: I searched through a bunch of ACX archives specifically for the word 'bad' in titles; I think both titles I found make a substantive claim about what is bad (Bad Definitions Of "Democracy" And "Accountability" Shade Into Totalitarianism and Perhaps It Is A Bad Thing That The World's Leading AI Companies Cannot Control Their AIs, the latter of which is slightly sarcastic while making the object-level claim that the AI companies cannot control their AIs).

Added2: It was easier to search the complete SSC history for 'bad'. The examples are Bad Dreams, How Bad Are Things?, Asymmetric Weapons Gone Bad, and Response To Comments: The Tax Bill Is Still Very Bad, which was the sequel to The Tax Bill Compared To Other Very Expensive Things. The last one is the only one similar to what we're discussing here, but in context it was said in response to his commenters and as a sequel to a post that did a substantive thing; the title was not the primary thesis for the rest of the internet, which again seems different to me.

Analyzing A Critique Of The AI 2027 Timeline Forecasts
Ben Pace · 3d · 5 · -3

(FWIW in this comment I am largely just repeating things already said in the longer thread... I wrote this mostly to clarify my own thinking.)

I think the conflict here is that, within intellectual online writing circles, using the title of a post to directly set a bottom line on the status of something is defecting on a norm, but this is not so in the 'internet of beefs' rest of the world, where titles are readily used as cudgels in status fights.

Within the intellectual online writing circles, this is not a good goal for a title, and it's not something that AI 2027 did (or, like, something that ~any ACX post or ~any LW curated post does)[1]. This is not the same as "not putting your bottom line in the title", it's "don't attempt to directly write the bottom line about the status of something in your title".

I agree you're narrowly correct that it's acceptable to have goals for changing the status of various things, and it's good to push back on implying that that isn't allowed by any method. But I think Zvi did make the point that the critique post attempted to do this using its title, which is not something AI 2027 did and is IMO defecting on a worthy truce in the intellectual online circles.

  1. ^

    To the best of my recollection. Can anyone think of counterexamples?

Consider chilling out in 2028
Ben Pace · 4d · 4 · 0

but most of my work is very meaningful and what i want to be doing

i don't want to see paris or play the new zelda game more than i want to make lessonline happen

Eric Neyman's Shortform
Ben Pace · 4d · 2 · 0

I wonder if we could've simply added to the sidebar some text saying "By promoting Soares & Yudkowsky's new book, we mean to say that it's a great piece of writing on an important+interesting question by some great LessWrong writers, but are not endorsing the content of the book as 'true'." 

Or shorter: "This promotion does not imply endorsement of object level claims, simply that we think it's a good intellectual contribution."

Or perhaps a longer thing in a hover-over / footnote.

Consider chilling out in 2028
Ben Pace · 5d · 2 · 0

Oh, that makes sense, thanks. That seems more like a thing for people whose work comes from internal inspiration / is more artistic, and also for people who have personal or psychological frictions that cause them to burn out a lot when they do this sort of burst-y work.

I think a lot of my work is heavily pulled out of me by the rest of the world setting deadlines (e.g. users making demands, people arriving for an event, etc), and I can cause those sorts of projects to pull lots of work out of me more regularly. I also think I don't take that much damage from doing it.

Consider chilling out in 2028
Ben Pace · 5d · 9 · 3

> Come 2028, I hope Less Wrong can seriously consider for instance retiring terms like "NPC" and "normie", and instead adopt a more humble and cooperative attitude toward the rest of the human race.

I am interested in links to the most prominent use of these terms (e.g. in highly-upvoted posts on LW, or by high-karma users). I have a hypothesis that it's only really used on the outskirts or on Twitter, and not amongst those who are more respected. 

I certainly don't use "NPC" (I mean, I've probably used it at some point, but I think only a handful of times) because it aggressively removes agency from people, and I think I've only used "normie" for the purpose of un-self-serious humor – I think it's about as anti-helpful a term as "normal person", which (as I've written before) I believe should almost always be replaced by referring to a specific population.

Consider chilling out in 2028
Ben Pace · 5d · 28 · 20

As a datapoint, none of this chilling out or sprinting hard discussion resonates with me. Internally I feel that I've been going about as hard as I know how to since around 2015, when I seriously got started on my own projects. I think I would be working about similarly hard if my timelines shortened by 5 years or lengthened by 15. I am doing what I want to do, I'm doing the best I can, and I'm mostly focusing on investing my life into building truth-seeking and world-saving infrastructure.

I'm fixing all my psychological and social problems insofar as they're causing friction to my wants and intentions, and as a result I'm able to go much harder today than I was in 2015. I don't think effort is really a substantially varying factor in how good my output is or my impact on the world. My mood/attitude is not especially dour and I'm not pouring blind hope into things I secretly know are dead ends. Sometimes I've been more depressed or had more burnout, but it's not been much to do with timelines and more about the local environment I've been working in or internal psychological mistakes.

To be clear, I try to take as little vacation time at work as I psychologically can (like 2-4 weeks per year), but that's because there's so much great stuff for me to build over the next decade(s), and that'd be true if I had 30-year timelines.

I am sure other people are doing differently-well, but I would like to hear from such people about their experience of things (or for people here to link to others' writing). (I might also be more interested in the next Val post being an interview with someone, rather than broad advice.)

Added: I mean, I do sometimes work 70 hour weeks, and I sometimes work 50 hour weeks, but this isn't a simple internal setting I can adjust; it's way more a fact about what the work demands of me. I could work harder, but primarily by picking projects that require it and where the external world is setting deadlines for me, not by "deciding" to work harder. (I've never really been able to make that decision; as far as I can quickly recall it's always failed whenever I've tried.)

Analyzing A Critique Of The AI 2027 Timeline Forecasts
Ben Pace · 5d · 2 · 0

Does it say this somewhere on the website?

37 · LessOnline 2025: Early Bird Tickets On Sale · 3mo · 5 comments
20 · Open Thread Spring 2025 · 4mo · 50 comments
281 · Arbital has been imported to LessWrong · 4mo · 30 comments
135 · The Failed Strategy of Artificial Intelligence Doomers · 5mo · 78 comments
109 · Thread for Sense-Making on Recent Murders and How to Sanely Respond · 5mo · 146 comments
83 · What are the good rationality films? [Q] · 7mo · 54 comments
93 · 2024 Petrov Day Retrospective · 9mo · 25 comments
136 · [Completed] The 2024 Petrov Day Scenario · 9mo · 114 comments
55 · Thiel on AI & Racing with China · 10mo · 10 comments
53 · Extended Interview with Zhukeepa on Religion · 10mo · 61 comments
Adversarial Collaboration (Dispute Protocol) · 6mo
Epistemology · 7mo · (-454)
Epistemology · 7mo · (+56/-56)
Epistemology · 7mo · (+9/-4)
Epistemology · 7mo · (+66/-553)
Petrov Day · 9mo · (+714)