Raemon

LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

Feedbackloop-First Rationality
The Coordination Frontier
Privacy Practices
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual
Drawing Less Wrong

Comments

Raemon · 30

Curated. Like others, I found this a good simpler articulation of the concept. I appreciated the disclaimers around the 

One thing I got from this post, which for some reason I hadn't gotten from previous posts, was the notion that "to what degree am I in a simulation?" may be situation-dependent. I.e. moments where I'm involved with historically important things might be more simulationy, other times less so. (Something had felt off about my previous question of "do more 'historically important' people have more of their measure in simulations?", and the answer is maybe still just "yes", but somehow it feels less magical and weird to ask "how likely is this particular moment to be simulated?")

Something does still feel pretty sus to me about the claim that "during historically significant moments, you might be more likely to see something supernatural-looking afterwards" (esp. if you think it should appear in >50% of your reality-measure-or-whatever).

The "think in terms of expected value" seems practically useful but also... I dunno, even if I was a much more historically significant person, I just really don't expect to see Simulationy Things. The reasoning spelled out in the post didn't feel like it resolved my confusion about this. 

(Independent of that, I agreed with Richard's critique of some of the phrasing in the post, which seemed to not quite internalize the claims David was making.)

Raemon · 40

It doesn't actually say one-way-or-another in the creation screen (in the chrome-extension tool at least). So, uh, let's see!

⚖ Other people will be able to see my prediction on this question before making a prediction (Raymond Arnold: 45%)

Raemon · 30

(My own answer is that if like >75% of people agreed on what consciousness means, I'd be like "okay yeah, Critch's point isn't super compelling". If it was between like 50-75% of people, I'd be like "kinda edge case." If it's <50% of people agreeing on consciousness, I don't think it matters much what definition is "correct.")

Raemon · 30

I don't feel very hopeful about the conversation atm, but fwiw I feel like you are missing a fairly important point while being pretty overconfident about not having missed it. 

Putting it a different way: is there a percentage of people who could disagree with you about what consciousness means that would convince you it's not as straightforward as assuming you have the correct definition of consciousness and can ignore everyone else? If <50% of people agreed with you? If <50% of the people with most of the power?

(This is not about whether your definition is good, or the most useful, or whatnot – only that, if lots of people turned out to mean different things by it, would it still particularly matter whether your definition was the "right" one?)

Raemon · 73

I think many of the things Critch has listed as definitions of consciousness are not "weak versions of some strong version", they're just different things.

You bring up a few times that LLMs don't "experience" [various things Critch lists here]. I agree, they pretty likely don't (in most cases). But part of Critch's point here, as I interpreted it, was that many of the things people mean by "consciousness" aren't actually about "experience" or "qualia" or whatnot.

For example, I'd bet (75%) that when Critch says they have introspection, he isn't making any claims about them "experiencing" anything at all – I think he's instead saying "in the same way that their information-processing system knows facts about Rome and art and biology and computer programming, and can manipulate those facts, it can also know and manipulate facts about its own thoughts and internal states" (whereas other ML algorithms may not be able to know and manipulate their thoughts and internal states).

Purposefulness: Not only irrelevant to consciousness but...

A major point Critch was making in the previous post is that when people say "consciousness", this is one of the things they sometimes mean. The point is not that LLMs are conscious in the way you are using the word, but that debates about whether they are conscious will include some people who think it means "purposefulness."

Raemon · 30

I think LessWrong used to be more like this, and "having culture" certainly had some upsides. There are also downsides. I certainly miss it, but I'm not sure whether I'd hit a button to change it.

Raemon · 20

I'm not sure what past-you meant here, but one thing you might think is "the number of hurdles you have to jump through to profit off drugs is 'hard', i.e. you (unnecessarily) need to be a very well-funded and well-connected company that can navigate bureaucratic hurdles" – it's not that you can't do it. It's just, like, "hard", ya know?

Raemon · 52

Oh, to be clear, I don't think it was bad for you to post this as-is. Just that I'd like to see more followup.

Raemon · 148

This post seems important-if-right. I get a vibe from it of aiming to persuade more than explain, and I'd be interested in multiple people gathering/presenting evidence about this, preferably at least some of them who are (currently) actively worried about China.

Raemon · 64

I've recently made a pull request (not quite ready to merge yet) that gives LessWrong Fatebook hoverovers (which are different from embeds; I'm considering also making embeds, although I think that UI takes up a bit too much space by default).

I am into "more Fatebook integration everywhere".

(I think individual Fatebook questions can toggle whether to show/hide predictions before you've made your own.)
