LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
I think it's better to use words that mean particular things than to try to fight a treadmill of super/superduper/hyper/etc.
Partly because I don't think a Superintelligence by that definition is actually, intrinsically, that threatening. I think it is totally possible to build That without everyone dying.
The "It" that is not possible to build without everyone dying is an intelligence that is either overwhelmingly smarter than all humanity, or, a moderate non-superintelligence that is situationally aware with the element of surprise such that it can maneuver to become overwhelmingly smarter than humanity.
Meanwhile, I think there are good reasons for people to want to talk about various flavors of weak superintelligence, and trying to force them to use some other word for that seems doomed.
Yeah, I don't super stand by the Neanderthal comment; I was just grabbing an illustrative example.
I just did a heavy-thinking GPT-5 search, which said roughly: "we don't know for sure; there's some evidence that, individually, they may have been comparably smart to us, but we seem to have had the ability to acquire and share innovations." This might not be a direct intelligence thing, but "having some infrastructure that makes you collectively smarter as a group" still counts for my purposes.
Nod. The point of it is that it's easy to change lyrics/chords/etc. in a place with a single source of truth that updates slides, scripts, and musician charts.
I ask because I'm building https://secularsolstice.vercel.app/ (warning: it's in beta and still janky in some places), which I'm in the process of trying to make a strict improvement over secularsolstice.github.io, while also playing nicely with secularsolstice.github.io for as long as they're both in use (it does a daily import from it).
The goals are:
– be a repository of all solstice content
– make it very easy to transform that content into a lot of different obvious versions you might want (such as nice slides, an all-speeches script, an all-song-lyrics script, a printed program, etc.)
– have a bunch of Musician Powertools (e.g. transposing songs, converting between song formats; see the sketch just below)
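To gesture at what I mean by a musician powertool, here's a minimal sketch of chord transposition over ChordPro-style bracketed chords. It's just an illustration of the idea; the helper names and the ChordPro assumption are mine, not the site's actual code:

```typescript
// Minimal sketch of transposition over ChordPro-style "[G]lyrics" lines.
// Hypothetical helpers, not the actual implementation on the site.
const NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];
const FLAT_TO_SHARP: Record<string, string> = { Db: "C#", Eb: "D#", Gb: "F#", Ab: "G#", Bb: "A#" };

// Transpose a single chord symbol like "Am7" or "Bb" by `semitones`.
function transposeChord(chord: string, semitones: number): string {
  const match = chord.match(/^([A-G][b#]?)(.*)$/);
  if (!match) return chord; // not a recognizable chord; leave untouched
  const [, rawRoot, suffix] = match;
  const root = FLAT_TO_SHARP[rawRoot] ?? rawRoot;
  const index = NOTES.indexOf(root);
  if (index === -1) return chord;
  const newRoot = NOTES[(index + (semitones % 12) + 12) % 12];
  return newRoot + suffix;
}

// Transpose every bracketed chord in a line of a chord chart.
function transposeLine(line: string, semitones: number): string {
  return line.replace(/\[([^\]]+)\]/g, (_, chord) => `[${transposeChord(chord, semitones)}]`);
}

// transposeLine("[G]Brighter than [C]today", 2)  ->  "[A]Brighter than [D]today"
```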
I currently have an almost-complete "Bulk Import Your Solstice From a Doc" feature, which works if your doc is an entire solstice program and each element's title uses a header-style font. I'm not sure if anyone would actually use it this year.
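The core idea of that importer is roughly "split the doc on header-styled paragraphs." Here's a simplified sketch with made-up types (a real version would read styles from the Google Docs API or exported HTML); it's not the actual code:

```typescript
// Simplified sketch: walk the doc's paragraphs, start a new program element
// at each header-styled paragraph, and collect the body text beneath it.
// The Paragraph shape here is hypothetical.
interface Paragraph {
  text: string;
  isHeader: boolean; // e.g. a Heading 1/2 style, or an oversized font
}

interface ProgramElement {
  title: string;
  body: string[];
}

function splitIntoElements(paragraphs: Paragraph[]): ProgramElement[] {
  const elements: ProgramElement[] = [];
  let current: ProgramElement | null = null;
  for (const para of paragraphs) {
    if (para.isHeader) {
      current = { title: para.text.trim(), body: [] };
      elements.push(current);
    } else if (current && para.text.trim() !== "") {
      current.body.push(para.text);
    }
  }
  return elements;
}
```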
I'm aiming for: a) the tool being easy enough that people might just use it to create their programs in the first place, b) interoperability with as many other formats as is practical, so you can use it for whichever bits are useful to you, and c) making it pretty easy to upload your program afterwards for posterity.
I'm basically interested in user-interviewing solstice organizers about it.
The format you have seems fairly easy for "bulk import individual speeches and songs," though probably somewhat annoying for importing the whole program. (Although, actually, it might be straightforward to do a pass importing the speeches, then the songs, then the table of contents, and have it try to autoselect the appropriate speeches/songs you just uploaded.)
I'm curious what your initial reaction is to the general idea, whether you'd find it useful for your own purposes, and whether you'd feel motivated to upload your program after the fact.
This is assuming ASI is net positive for expected lifespan.
(I think it's a bit wonky: in most worlds, I think ASI kills everyone, but in some worlds it radically improves longevity, probably to 1000+ years, though I think you need some time-discounting there. This means it substantially reduces the median lifespan but might also substantially increase the mean lifespan. I'm not sure what to make of that, and can imagine it basically working out to what you say here, but I think it does depend on your specific beliefs about that.)
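A toy version of that math, with purely illustrative numbers (not my actual credences): say that without ASI I'd expect ~40 more years, and ASI arrives with an 80% chance of killing everyone within ~10 years and a 20% chance of giving me ~1000 more years. Then:

$$
\begin{aligned}
\text{median} &\approx 10 \text{ years} \quad (\text{the 50th percentile lands in the doom branch}) \\
\text{mean} &= 0.8 \times 10 + 0.2 \times 1000 = 208 \text{ years} \gg 40
\end{aligned}
$$

So on those toy numbers the median drops well below the no-ASI baseline while the mean rises well above it, and time-discounting eats into the mean upside without helping the median.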
Having now practiced this for two years: "Think It Faster" is most obviously useful when you come away from it with a concrete new habit, and then actually implement that habit. It has combined well with hiring a longterm Thinking Assistant who helps remind me of my habits.
I've run this exercise at workshops, where it produces interesting results in the moment, but I suspect it doesn't end up helping as much longterm because people don't have good habit infrastructure.
Some habits I've gotten from this are more general (in particular "notice the blurry feeling of not-quite-knowing-what-to-do" or "notice 'flailingness'", followed by "deliberately strategize about figuring out what to do next").
A lot are more specific (i.e. most of the concepts and habits in Debugging for Mid Coders).
...
As I review this now, my biggest realization is: I mention "the 5-minute version" at the end, almost as an afterthought. But the 5-minute version is the version I do ~3x per week, and it's the source of most of the value. The hour-long version still feels useful to do first, to develop a rich understanding of what is possible. But it'd be kinda embarrassing if I never tried a "Think It Faster" lesson plan that just started with the 5-minute version, to find out whether the 60-minute version is basically unnecessary.
So I'm thinking about that now, and if I run more workshops I'll try out some variants.
...
The original Think It Faster tweet ends with Eliezer saying:
Every time I complete a chain of thought that took what my intuition says was a lot of time, I look back and review and ask myself "How could I have arrived at the same destination by a shorter route?"
[...] If AI timelines were longer I'd tell somebody, like, try that for 30 years and see what happens.
I've now been doing this exercise for a few years. I previously noted in Tuning your Cognitive Strategies that, idk, subjectively it seems to me like I am much more intellectually generative than I was before I started (separate from being better at specific individual skills). I don't think I have much more evidence about that now than I did then.
I still endorse this, but my work last year was mostly about trying to make this less necessary.
Sometimes you need to make people struggle to force some kind of breakthrough and learn something important. But needing to do that is a skill issue.
I think two new concepts here are:
If you're good at those, you should need to Shadowmoth people less, and can instead just explain what to do and why it's important, and they should just get it.
(I think at least part of what's going on is that there's a separate common belief that being Superintelligent (compared to the single best humans) is enough to bootstrap to Overwhelming Superintelligence, and some of the MIRI vs Redwood debates are about how necessarily true that is.)