Raemon

LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

Feedbackloop-First Rationality
The Coordination Frontier
Privacy Practices
The LessWrong Review
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual

Comments

Raemon · 3d

Uh, I do think it's not obviously good (and, in fact, I'd lean bad) for this sort of thing to be open-sourced.

Raemon · 3d

I hadn't known Replika started out with this goal. Interesting.

"It is especially a pity that his digital twin lived less than his biological original, who died at 32."

Not exactly the main point, but I'd probably clock this in terms of the number of conversational inputs/outputs (across all users). Which might still imply "living less long"*, but less so than if you're just looking at wall-clock time.

*Also, obviously, an old-school chatbot doesn't actually count as "living" in any actually meaningful sense. I think modern LLMs might plausibly.

Raemon · 4d

Yesterday I was at a "cultivating curiosity" workshop beta-test. One concept was "there are different mental postures you can adopt, that affect how easy it is to notice and cultivate curiosities."

It wasn't exactly the point of the workshop, but I ended up with several different "curiosity-postures" that were useful to try on while trying to lean into "curiosity" re: topics that I feel annoyed or frustrated or demoralized about.

The default stances I end up with when I Try To Do Curiosity On Purpose are something like:

1. Dutiful Curiosity (which is kinda fake, although capable of being dissociatedly autistic and noticing lots of details that exist and questions I could ask)

2. Performatively Friendly Curiosity (also kinda fake, but does shake me out of my default way of relating to things. In this, I imagine saying to whatever thing I'm bored/frustrated with "hullo!" and try to acknowledge it and give it at least some chance of telling me things)

But some other stances to try on, that came up, were:

3. Curiosity like "a predator." "I wonder what that mouse is gonna do?"

4. Earnestly playful curiosity. "oh that [frustrating thing] is so neat, I wonder how it works! what's it gonna do next?"

5. Curiosity like "a lover". "What's it like to be that you? What do you want? How can I help us grow together?"

6. Curiosity like "a mother" or "father" (these feel slightly different to me, but each is treating [my relationship with a frustrating thing] like a small child who is a bit scared, who I want to help, who I am generally more competent than but still want to respect the autonomy of)

7. Curiosity like "a competent but unemotional robot", who just algorithmically notices "okay what are all the object level things going on here, when I ignore my usual abstractions?"... and then "okay, what are some questions that seem notable?" and "what are my beliefs about how I can interact with this thing?" and "what can I learn about this thing that'd be useful for my goals?"

Raemon · 7d

Wow, the joke keeps turning out to be even older.

Raemon · 9d

That's actually not (that much of) a crux for me (I also think it's mildly manipulative, but below the threshold where I feel compelled to push hard for changing it).

Raemon · 11d

Curated.

I sure do wish this question had easier answers, but I appreciate this post laying out a lot of the evidence.

I do have some qualms about the post: while it's pretty thorough on the evidence re: seed oils, it somewhat handwavily assumes some other nutrition claims about processed foods that (I'm willing to bet) also have highly mixed/confusing evidence bases. But I still thought the good parts of the post were good enough to be worth curating.

Raemon · 11d

I'm trying to decide whether to rename this post "Metastrategy Workshop." "Fractal Strategy" happened to make sense for the skillset I had put together at the time, but I don't know that it's what I'm going to stick with.

Raemon · 11d

One thing to remember is that I'm (mostly) advocating playing each game only once, and doing a variety of games/puzzles/activities, many of which should just be "real-world" activities, as well as plenty of deliberate Day Job stuff. Some of them should focus on resource management, and some of that should be "games" with quick feedback loops, but it sounds like you're imagining it being more focused on the goodhartable versions of that than I think it is.

(Also, I think multiplayer games where all the information is known are somewhat of an antidote to these particular failure modes? Even when all the information is known, there's still uncertainty about how the pieces combine together, and there's some kind of brute-reality-fact about 'well, the other players figured it out better than you.')

Raemon · 14d

Curated. (In particular, recommending people click through and read the full Scott Alexander post.)

I've been tracking the Rootclaim debate from the sidelines and finding it quite an interesting example of high-profile rationality. 

I have a friend who's been following the debate quite closely and found that each debater, while flawed, had interesting points that were worth careful thought. My impression is that a few people I know shifted from basically assuming Covid was probably a lab leak to being much less certain.

In general, I quite like people explicitly making public bets, and following them up with in-depth debate.

Raemon · 14d

What would a "qualia-first calibration" app look like?

Or, maybe: "metadata-first calibration"

The thing with putting probabilities on things is that often, the probabilities are made up. And the final probability throws away a lot of information about where it actually came from.

I'm experimenting with primarily focusing on "what are all the little metadata flags associated with this prediction?". I think some of this is about "feelings you have" and some of it is about "what do you actually know about this topic?"

The sort of app I'm imagining would help me identify whatever indicators are most useful to me. Ideally it has a bunch of users, and types of indicators that have been useful to lots of users can be promoted as things to think about when you make predictions.

Braindump of possible prompts:

– is there a "reference class" you can compare it to?

– for each probability bucket, how do you feel? (including 'confident'/'unconfident' as well as things like 'anxious', 'sad', etc.)

– what overall feelings do you have looking at the question?

– what felt senses do you experience as you mull over the question? ("my back tingles", "I feel the Color Red")

...

My first thought here is to have various tags you can re-use, but another option is to just do a totally unstructured text-dump and somehow do factor analysis on word patterns later?
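To make the tag version concrete, here's a minimal sketch of the data shape I have in mind (all names here are hypothetical, nothing that exists yet): each prediction stores its probability plus its metadata flags, and once questions resolve, you can score which flags tend to accompany well-calibrated predictions, e.g. via average Brier score per tag.

```python
# Hypothetical sketch, not an existing API: a prediction record carrying
# metadata flags, scored after resolution to surface useful indicators.
from __future__ import annotations
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Prediction:
    question: str
    probability: float                             # stated probability (0-1)
    tags: list[str] = field(default_factory=list)  # metadata flags: feelings, reference classes, felt senses
    notes: str = ""                                # optional unstructured text-dump
    outcome: bool | None = None                    # filled in once the question resolves

def brier_by_tag(predictions: list[Prediction]) -> dict[str, float]:
    """Average Brier score per tag, so tags that tend to accompany
    well-calibrated predictions stand out (lower is better)."""
    scores: dict[str, list[float]] = defaultdict(list)
    for p in predictions:
        if p.outcome is None:
            continue  # skip unresolved questions
        brier = (p.probability - float(p.outcome)) ** 2
        for tag in p.tags:
            scores[tag].append(brier)
    return {tag: sum(s) / len(s) for tag, s in scores.items()}

preds = [
    Prediction("It rains tomorrow", 0.7, ["have-reference-class", "anxious"], outcome=True),
    Prediction("Project ships Friday", 0.9, ["confident", "back-tingles"], outcome=False),
]
print(brier_by_tag(preds))
# {'have-reference-class': 0.09, 'anxious': 0.09, 'confident': 0.81, 'back-tingles': 0.81}
```

The same record could also hold the unstructured text-dump (the `notes` field above), which is what you'd later feed into the factor analysis over word patterns.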
