Camille Berger

Paris (Laplace)
Conversational Rationality, Cyborgism, AIS via Debate, Translating between philosophical traditions. 
I sometimes write poems.

Answer by Camille Berger

A few people referred to alexithymia or to overcoming it. I think most people don't realize how precise most expressions around feelings are.

"My arms are falling" is an expression in french to explain that you're shocked. I experienced myself my arms becoming impossible to move, as if filled with concrete, after going through some relational shocks (the same is true of "being blinded by X", some extremely intense emotions have literally made me blind for a few secs)

While I'm at it, some mental shocks literally feel like a physical shock! One of them felt to me like an egg being broken against my skull.

"Making nodes in one's head" means overthinking something. "Untying things" means getting helpful insights. However, it's literally what I went through during therapy. There is a literal feeling of untying an invisible "force field", and those nodes are almost always correlated with mental schemes that are uselessly complex. Some people are genuinely worried that you could actively harm your own mental health through overthinking, they're not just finding an excuse for switching topics!

"Vibes" and "vibe" are extremely concrete things for people who got into very special states of consciousness. The french equivalent for that, "ondes", felt so radio-communication related I thought it had to be some telepathy pseudoscience BS. Actually, people are talking about components of subjective perceptions, and some of those (e.g. color, or mood) literally feel/behave like waves when under altered consciousness, and engage in resonance effects as well. To the detriment of the image, however, there seems to be a real contingent that extends this observation to "and we can use them to do telepathy or influence fate".

Just discovered an absolute gem. Thank you so much.

Informative feedback! Though I'm sorry if it wasn't clear: I'm not talking about this list. The post I linked is more like internal documentation for people working in this space, and I thought the OP and similarly engaged people could benefit from knowing about it. I don't think it's "underrated" in any way (I'm still learning something from your comment, though, so thanks!).

What I meant was that I noticed that posts presenting the projects in detail (e.g. Announcing the Double Crux Bot) tend to generate less interest than this one, and that's a meaningful update for me. I think I didn't even realize a post like this was "missing".

Related: https://www.lesswrong.com/posts/vcuBJgfSCvyPmqG7a/list-of-collective-intelligence-projects

I had never thought about approaching this topic in the abstract, but judging from the karma, this is actually what people want, rather than existing projects.

I'm surprised! I thought people were overall uninterested in this topic, but it seems more like the problem itself hadn't been stated in the first place.

Hi! Thank you for writing this comment. I understand it can be a bit worrying to feel like your points might not be understood, but I'll give it a try nonetheless. I really genuinely want to fix any serious flaw in my approach.

However, I find myself in a slightly strange situation. Part of your feedback is very valuable. But I also believe that you misunderstood part of what I was saying. I could apply the skills I described in the post to your comment as a performative example, but I sense that you could see it as a form of implied sarcasm, and it would be unethical, so I'll refrain from doing that. A last part of me just feels like your point is "part of this post is poorly written". I've made some minor edits in the hope that they accommodate your criticism.

My suggestion would be for you to watch real-life examples of the techniques I promote (say https://www.youtube.com/watch?v=d2WdbXsqj0M and https://www.youtube.com/watch?v=_tdjtFRdbAo ), then comment on those examples instead.

Alternatively, you can just read my answers: 

Rephrasing is often terrible; 

Agreed; I've added the detail about genuinely asking your interlocutor whether this is what they mean and, if not, inviting them to offer a correction (e.g. "If I got you right, and feel free to correct me if I didn't..."). I think this form makes it almost always a pleasant experience, and I somehow forgot this important detail.

Your suggestion for attacking personal experience [...]

You're referring to point 4, not 5, right?
If so, I think this is extrapolating beliefs I don't actually have. I admit, however, that I didn't choose a good example; you can refer to the Street Epistemology video above for a better one.

I'll replace the example soonish. In the meantime, please note that I do not suggest "attacking" personal experiences. I suggest asking "What helps us distinguish reliable personal experiences from unreliable ones?". This is a valid question to ask, in my view. For a bunch of reasons, that question is more likely to bounce off, so I prefer to ask "How do you distinguish personal experiences from [delusions]?", where "[delusions]" is a term deliberately borrowed from the conversation partner. I think most interlocutors will be tempted to answer something along the lines of intersubjectivity, repeatability or empirical experiments. But I agree this is a delicate example and I'd be better off pointing to something else.

Stories need to actually be short, clear, and to the point or they just confuse the matter more. 

This was part of the details I was omitting. I'll add it.

Caring about their underlying values is useful, but it needs to be preceded by curiosity about and understanding of, or it does no good.

Agreed. This was implied in several parts of the post, e.g. "Be genuinely truth-seeking" in the ethical caveats. But I don't think it is that hard.

A working definition may or may not be better than a theoretical one.

Please note that I'm talking about conversations that happen between rationalists and non-rationalists over entry-level arguments, e.g. "We can't lose control of AI because it's made of silicon", not "Davidad has a promising alignment plan" (please note that I'm not arguing that these techniques should be applied to AI Safety outreach and advocacy; this is just an example). I think we really should not spend 15 minutes with someone not acquainted with LessWrong, or even with AI, defining "losing control" in a way that is close to mathematically formal. I think that "What do you mean by 'losing control'? Do you mean that, if we ask it to do something specific, it won't do it? Or do you mean something else?" is a good enough question. I'd rather discuss the details once the person is more acquainted with the topic.

There will, of course, be situations where this isn't true. The law of equal and opposite advice applies. But in most entry-level arguments, I'd rather have people spend less time problematizing definitions and more time asking their interlocutor for their reasons.

People don't generally use Bayes rule!

Of course. I'm not suggesting that we mention Bayes' rule out loud. Nor am I suggesting that people actually use Bayes' rule in their everyday life. I'm noting that the techniques I consider more robust are the ones that lead people to apply an approximation thereof, usually by contrasting one piece of evidence under two different hypotheses. The reference to "Bayes" comes from Bayesian psychology of reasoning; my model is closest to the one described in the Erotetic Theory of Reasoning (https://web-risc.ens.fr/~smascarenhas/docs/koralus-mascarenhas12_erotetic_theory_of_reasoning.pdf).
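To make "contrasting one piece of evidence under two different hypotheses" concrete, here is the odds form of Bayes' rule that this move informally approximates (a sketch for readers, not something I'd say out loud in a conversation):

$$\frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(E \mid H_1)}{P(E \mid H_2)} \times \frac{P(H_1)}{P(H_2)}$$

In words: asking "how surprised would you be to see this evidence if H1 were true, versus if H2 were true?" elicits something like the likelihood ratio, which is the only contribution the evidence itself makes to the update.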

Something said in point 8 seems like the key.

It is the key; I thought I had made that clear with "Yet the mindset itself is the key".
However, I don't want to make a post about it without explaining the ways in which it manifests, because healing myself made no sense up until I started analyzing the habits of healed people. Some people who were already healed didn't want to "give the secrets away" or scoffed at my attempts. They came across to me as snobbish and as preventing me from actually learning, but I really got a lot out of noting down recurring patterns in their conversations, if only because it allowed me to do Deliberate Practice.

Finally, please remember that this post is an MVP. It is not meant to be exhaustive or to cover all the nuances of the techniques; it's just that I'd rather write a post than nothing at all, and the entire sequence will take time to publish.

If you feel like I completely misunderstood your points, and you are open to having my skills applied to our very conversation, feel free to DM me a Calendly link and we can sort it out live. I'd describe myself as a good conversation partner, and I would put the probability of the exchange going awry quite low.

PS: It would help me out if you could quote the [first sentence of the] parts you are reacting to, in order to make it clear what you are talking about. I hope I'm right in understanding which parts of the post you are reacting to.

Workshops: 
https://deepcanvass.org/ regularly organizes introductions to Deep Canvassing. My personal take is that the workshop is great, but I don't find it entirely aligned with a truth-seeking attitude (it's not appalling either), and I would suggest rationalists bring their own twist to it.
https://www.joinsmart.org/ also organizes workshops that often vary in theme. Same remark as above.
There is a Discord server accessible from https://streetepistemology.com/; they organize regular practice sessions.
Motivational Interviewing and Principled Negotiation are common enough for you to find a workshop near where you live, I guess.

There's also the elephant in the room: my own eclectic workshop, which mostly synthesizes all of the above with (I believe) a more rationalist orientation and stricter ethics.

Someone told me about people in the US who trained on "The Art of Difficult Conversations"; I'd be happy to have someone leave a reference here! If you're someone who is used to coaching people through disagreements, feel free to drop your services below as well.

I read these comments a few days ago. They prompted me to try applying something inspired by what was written in the post, but directly on my muscle tension: I lightly Focus on it, then tell myself to "side with" the tension / feeling, while also telling myself that it's OK to do so, not trying to "bust" it or put it into words, and using chipmonk's technique (cf. his blog) to explore resistance around being seen displaying "the underlying emotion".

I have the very clear impression that it weakens the tension quite fast (I just timed it; it took about 30 seconds). I'm not getting any insight into what the tension was about specifically.

That's a purely subjective experience report, and it might be heavily biased.

Side comment from someone who knows a thing or two about the psychology of argumentation:

1-I think that including back-and-forth in the argument (e.g. LLM debates or consulting) would have a significant effect. In general, in-person argumentation versus mere exposure to arguments shows drastic differences.

2-In psychology of reasoning experiments, we sometimes observe that people are very confused about updates in "probability" (e.g. "70% engineers, 30% lawyers, I pick someone at random. What's the probability they're one or the other?" - "Fifty-fifty"; I spell out the normative answer after this list). I wouldn't be surprised if the results were different if you asked "On a 0 to 10 scale where 0 is... and 10 is..., where do you stand?".

3-The way an argument is worded (claim first, data second, example third vs. example first, data second, claim third) also has an impact, at least according to argumentation theory. In particular, I recall that no concrete examples were given in the arguments (e.g. "Bad llama", "Devin", "Sydney", etc.), which can give the impression of an incomplete argument.
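To spell out the normative answer in the engineer/lawyer example of point 2 (my reading of it, assuming the person really is drawn uniformly from the described population and no further evidence is given):

$$P(\text{engineer}) = \frac{0.7}{0.7 + 0.3} = 0.7, \qquad P(\text{lawyer}) = 0.3$$

The stated base rates already fix the answer at 70/30; "fifty-fifty" is the kind of confusion about "probability" that point 2 is gesturing at.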

Note: This might be me not being well-informed enough about this particular initiative. However, at this point, I'm still often confused by and pessimistic about most communication efforts around AI Risk. This confusion is usually caused by the content covered and the style in which it is covered, and the style and content here do not seem to veer far from what I typically identify as a failure mode.

I imagine that your focus demographic is not LessWrong or people who are already following you on Twitter.
I feel confused.

Why test your message there? What is your focus demographic? Do you have a focus group? Do you plan to test your content in the wild? Have you interviewed focus groups that expressed interest and engagement with the content?

In other words, are you following well-grounded advice on risk communication? If not, why?

I feel some worry when reading your comment on stereotypes.

I think that what I have depicted here gestures at vague axes in a multidimensional space, and I sort of expect that people can see which coordinates they're closer to and, mainly, realize that others might be at a different location, one they still need to inquire about. I hope they adopt a certain gentleness and curiosity in acknowledging that someone might have a different perspective on rationality, and I hope that they will not try to label people out loud.

I'm always a bit worried when naming things, because people seem to associate categories with "boxes" or "boundaries" rather than "shores of vast and unknown territories".
