Raemon

LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

Feedbackloop-First Rationality
The Coordination Frontier
Privacy Practices
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual
Drawing Less Wrong

Comments

Raemon

A thought about the debate: I don't usually prefer going to talks as a format for learning, and I expect a transcript to be long and meandering as you touch on cruxes that are significant to you two but not necessarily to me. 

A thing I would personally value after-the-fact is a summary of "what things did either of you learn?" (and I might generally prefer this for most debate/dialogue formats)

That all said, I think it's cool that you're holding events like these :)

Raemon

(fixed formatting for Whispering Earring)

Raemon

I also had a pretty similar experience. 

Raemon

(tl;dr: I'd be pretty interested in hearing more about your research team, and what your goals and bottlenecks are)

Parts of the experiment happened. I am currently evaluating how it's gone so far. It's been ~2 years, during which I think I've put maybe 6-7 months of serious fulltime effort into the project.

I turned the ideas here into a workshop, which I ran 6 times for ~4 people on average. 

I iterated on the curriculum across those workshops. They weren't really in a form I expected to work that well (each was only a few days, where I think it takes a couple of weeks to have a real shot at forming new habits).

The first workshop, I charged $200, and at the time people said it was worth $800 on average; 6 months later people said on average $1100 (but with some people saying it ended up not really being valuable). I'm currently trying to get 6-month followup data from the 2nd and 3rd workshops, but it's looking like those are on track to get rated lower.

I'm currently mulling over whether to try and run a 1-month program that starts off with something like my workshop, where you then go on to do a month of research with frequent checkins about your metacognitive practices. It's not a slam dunk to do so, given my AI timelines and how much iteration seems likely to be necessary to get this working well.

Some particularly relevant posts for The Story So Far:

Raemon

A thing unclear from the interaction: it had seemed towards the end that "build a profile to figure out where the bottleneck is" was one of the steps towards figuring out the problem, and that the LLM was (or might have been) better at that part. And, maybe models couldn't solve your entire problem wholesale, but there were still potentially skills in identifying factorable pieces that were better fits for models.

Raemon

So, I think I need to distinguish between "Feedbackloop-first Rationality" (which is a paradigm for inventing rationality training) and "Ray's particular flavor of metastrategy", which I used feedbackloop-first rationality to invent (which, if I had to give it a name, I'd call "Fractal Strategy"[1], but that sounds sort of pretentious, and normally I just call it "Metastrategy" even though that's too vague).

Feedbackloop-first Rationality is about the art of designing exercises: thinking about what sorts of exercises apply across domains, which feedback loops will turn out to help longterm, which feedback loops will generalize, etc.

"Fractal Strategy" is the art of noticing what goal you're currently pursuing, whether you should switch goals, and what tactics are appropriate for your current goal, in a very fluid way (while making predictions about those strategy outcomes).

Feedbackloop-first-rationality isn't actually relevant to most people – it's really only relevant if you're a longterm rationality developer. Most people just want some tools that work for them, they aren't going to invest enough to be inventing their own tools. Almost all my workshops/sessions/exercises are better framed as Metastrategy. 

I recently tried to give an impromptu talk that focused on Feedbackloop-first rationality (forcing myself out of a comfort zone of talking about practical metastrategy), and it floundered and sputtered and I eventually pivoted to making it a demo of fractal strategy that worked much better. 

This is probably mostly because I just didn't prepare for the talk. But I think it's at least partly because I'd previously been conflating them, and also, that I don't really know that many people who feel like the target audience for feedbackloop-rationality itself.

  1. ^

    But it sort of defies having a name because it involves fluidly switching between so many modes and tactics that it's hard to pin down what the central underlying move is. The "fractal" part is just one 

Raemon

Yeah this one has been pretty high on my list (or, a fairly similar cluster of ideas)

Raemon

Nod, this feels a bit at the intersection of what I had in mind with "Cyborgism", and the "Schleppy work in narrow domains" section.

Some thoughts: for this sort of thing, there's a hypothesis ("making it easier to change representations will enable useful thinking in math"), and a bunch of annoying implementation details you need to handle in order to test the hypothesis (i.e. actually getting an LLM to do all that work reliably).

So my next question here is "can we test out a version of this sort of thing powered by some humans-in-a-trenchcoat, or otherwise somehow test the ultimate hypothesis without having to build the thing?" I'm curious about your intuitions on that.

Raemon

I think there's a possibility for UI people to make progress on the reputation-tracking problem by virtue of tight feedback loops, relative to people thinking more abstractly about it.

Are there particular reputation-tracking-problems you're thinking of? (I'm sure there are some somewhere, but I'm looking to get more specific)

I'm working on a poweruser LLM interface but honestly it's not going to be that much better than Harpa AI or Sider.

Raemon

Curated. This post's framing resonated a lot with my own. I think the questions of how to cultivate impact, agency and taste are some of the more important questions that LessWrong tackles.

Much of this post described phenomena I'd observed myself, but a few particular framings stood out to me as helpful crystallizations:

The first was "Don’t rely too much on permission or encouragement." I think a few Lightcone employees have also been slowly learning something along these lines. Our CEO has a lot of taste and vision, but sometimes one of us comes up with an idea that doesn't immediately resonate with him, and it's not until we actually build some kind of prototype ourselves that other people start to believe in it.

Another was:

Unfortunately, I am here to tell you that, at least if you are similar to me, you will never feel smart, competent, or good at things; instead, you will just start feeling more and more like everyone else mysteriously sucks at them.

For this reason, the prompt I suggest here is: what does it seem like everyone else is mysteriously bad at? That’s probably a sign that you have good taste there.

I had heard this sort of idea before, but this was the first time I parsed it as a technique you could explore on purpose. (i.e. actively investigate what things people seem mysteriously bad at, and use that to guide where you can maybe trust your taste more).

Finally:

The first domain that I got some degree of taste in was software design, and I remember a pretty clear phase transition where I gained the ability to improve my designs by thinking harder about them. After that point, I spent a lot of time iterating on many different design improvements—most of which I never implemented because I couldn’t come up with something I was happy enough with, but a few of which turned into major wins.

Recently I've been exploring the "think real hard about things" paradigm. This paragraph helped flesh out for me that there are some prerequisites for "think real hard" to work. I think some of these are skills that work across domains (i.e. "notice when you can't actually explain something very clearly" -> "you're still confused, try to break the confusion down"). But it makes sense that some of it is domain-specific.

There's an important question I previously would have framed as: "If you're tackling an unfamiliar domain, how much mileage can you get from general, cross-domain reasoning skills?" But a different question is: "What's the minimum amount of domain-specific skill you need in order for 'think about it in advance' to really help?"
