Raemon

LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

Feedbackloop-First Rationality
The Coordination Frontier
Privacy Practices
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual
Drawing Less Wrong

Comments

Raemon

Mm. I feel somewhat dissatisfied with the naming situation, but it's (probably?) not actually important. I agree "wizard" feels righter in those cases but wronger in some other ones.

Although, I think I'm now tracking a bit more subtlety here than I was before. 

A distinction here is between "ability to turn knowledge into stuff-happening-in-the-world" and "ability to cause stuff to happen in the world." Does a very strong or dextrous person have more X-power than a weaker/clumsier person, all else equal? (I think your answer is "yes", but for purposes of the-lacking-in-your-soul there's an aesthetic that routes more through knowledge?)

Raemon

I like this but from a standpoint of geek thematic pedantry, I think a more appropriate trope is ‘Artificer Power’. 

I do realize the third syllable cuts into the usability a bunch. But, c’mon, wizards are not about welding or sewing. 

Raemon

A thought about the debate: I don't usually prefer going to talks as a format for learning, and I expect a transcript to be long and meandering as you touch on cruxes that are significant to you two but not necessarily to me. 

A thing I would personally value after-the-fact is a summary of "what things did either of you learn?" (and I might generally prefer this for most debate/dialogue formats)

That all said, I think it's cool that you're holding events like these :)

Raemon

(fixed formatting for Whispering Earring)

Raemon

I also had a pretty similar experience. 

Raemon

(tl;dr: I'd be pretty interested in hearing more about your research team, and what your goals and bottlenecks are)

Parts of the experiment happened. I am currently evaluating how it's gone so far. It's been ~2 years, during which I think I've put maybe 6-7 months of serious full-time effort into the project.

I turned the ideas here into a workshop, which I ran 6 times for ~4 people on average, iterating on the curriculum as I went. The workshops were not really in a form I expected to work that well (only a few days, where I think it takes a couple of weeks to have a real shot at forming new habits).

For the first workshop I charged $200; at the time, attendees said it was worth $800 on average, and 6 months later they said $1100 on average (though some said it ended up not really being valuable). I'm currently trying to get 6-month followup data from the 2nd and 3rd workshops, but it's looking like those are on track to get rated lower.

I'm currently mulling over whether to try and run a 1-month program that starts off with something like my workshop, where you then go on to do a month of research with frequent checkins about your metacognitive practices. It's not a slam dunk to do so, given my AI timelines and how much iteration seems likely to be necessary to get this working well.

Some particularly relevant posts for The Story So Far:

Raemon

A thing unclear from the interaction: it had seemed towards the end that "build a profile to figure out where the bottleneck is" was one of the steps towards figuring out the problem, and that the LLM was (or might have been) better at that part. And, maybe models couldn't solve your entire problem wholesale, but there was still potential skill in identifying factorable pieces that were better fits for models.

Raemon

So, I think I need to distinguish between "Feedbackloop-first Rationality" (which is a paradigm for inventing rationality training) and "Ray's particular flavor of metastrategy", which I used feedbackloop-first rationality to invent (and which, if I had to give it a name, I'd call "Fractal Strategy"[1], though that sounds sort of pretentious and normally I just call it "Metastrategy" even though that's too vague).

Feedbackloop-first Rationality is about the art of designing exercises: thinking about what sorts of exercises apply across domains, which feedback loops will turn out to help longterm, which feedbackloops will generalize, etc.

"Fractal Strategy" is the art of noticing what goal you're currently pursuing, whether you should switch goals, and what tactics are appropriate for your current goal, in a very fluid way (while making predictions about those strategy outcomes).

Feedbackloop-first Rationality isn't actually relevant to most people – it's really only relevant if you're a longterm rationality developer. Most people just want some tools that work for them; they aren't going to invest enough to be inventing their own tools. Almost all my workshops/sessions/exercises are better framed as Metastrategy.

I recently tried to give an impromptu talk that focused on Feedbackloop-first Rationality (forcing myself out of my comfort zone of talking about practical metastrategy), and it floundered and sputtered, and I eventually pivoted to making it a demo of fractal strategy, which worked much better.

This is probably mostly because I just didn't prepare for the talk. But I think it's at least partly because I'd previously been conflating them, and also that I don't really know that many people who feel like the target audience for Feedbackloop-first Rationality itself.

  1. ^

    But it sort of defies having a name, because it involves fluidly switching between so many modes and tactics that it's hard to pin down what the central underlying move is. The "fractal" part is just one

Raemon

Yeah this one has been pretty high on my list (or, a fairly similar cluster of ideas)

Raemon

Nod, this feels a bit at the intersection of what I had in mind with "Cyborgism", and the "Schleppy work in narrow domains" section.

Some thoughts: for this sort of thing, there's a hypothesis ("making it easier to change representations will enable useful thinking in math") and a bunch of annoying implementation details you need to work through to test the hypothesis (i.e. actually getting an LLM to do all that work reliably).

So my next question here is: "can we test out a version of this sort of thing powered by some humans-in-a-trenchcoat, or otherwise somehow test the ultimate hypothesis without having to build the thing?" I'm curious about your intuitions on that.
