All of Lucie Philippon's Comments + Replies

Agreed that the current situation is weird and confusing.

The AI Alignment Forum is marketed as the actual forum for AI alignment discussion and research sharing. However, it seems that the majority of discussion has shifted to LessWrong itself, in part because most people are not allowed to post on the Alignment Forum, and because most AI Safety related content is not actual AI alignment research.

I basically agree with Reviewing LessWrong: Screwtape's Basic Answer. It would be much better if AI Safety related content had its own domain name and home page, with som... (read more)

Ben Pace
Of note: the AI Alignment Forum content is a mirror of LW content, not distinct. It is a strict subset.
habryka

I think it would be extremely bad for most LW AI Alignment content if it was no longer colocated with the rest of LessWrong. Making an intellectual scene is extremely hard. The default outcome would be that it would become a bunch of fake ML research that has nothing to do with the problem. "AI Alignment" as a field does not actually have a shared methodological foundation that causes it to make sense to all be colocated in one space. LessWrong does have a shared methodology, and so it makes sense to have a forum of that kind. 

I think it could make se... (read more)

I did not know about this either. Do you know whether the EAs in the EU Commission know about it?

Katalina Hernandez
Hi Lucie, thanks so much for your comment! I’m not very involved with the Effective Altruism community myself, though I did post the same Quick Take on the EA Forum today, but I haven’t received any responses there yet. So I can’t really say for sure how widely known this is. For context: I’m a lawyer working in AI governance and data protection, and I’ve also been doing independent AI safety research from a policy angle. That’s how I came across this, just by going through the full text of the AI Act as part of my research.  My guess is that some of the EAs working closely on policy probably do know about it, and influenced this text too! But it doesn’t seem to have been broadly highlighted or discussed in alignment forums so far. Which is why I thought it might be worth flagging. Happy to share more if helpful, or to connect further on this.

Thanks for the feedback! It made more sense as an event title. I'll edit it.

P. João
Thanks Lucie Philippon! Follow the video

Earlier discussion on LW about the effectiveness of zinc lozenges mentioned that flavorings which make them taste nice actually prevent the zinc effect.

From this comment by philh (quite a chain of quotes haha):

According to a podcast that seemed like the host knew what he was talking about, you also need the lozenges to not contain any additional ingredients that might make them taste nice, like vitamin C. (If it tastes nice, the zinc isn’t binding in the right place. Bad taste doesn’t mean it’s working, but good taste means it’s not.) As of a few years ago, that

... (read more)

It seems that @Czynski changed the structure of the website and that entries are now stored in this folder.

Maybe you could DM him?

Screwtape
The structure did change. I've gone ahead and added a SFLW file to reflect the new structure, using the description Andrew had for the First Saturday SFLW group. @Andrew Gaul if you want to tweak that description look for /_posts/2025-01-05-SFLW.md and change it as you need.

Just got confirmation from Effektiv Spenden (with Mieux Donner input) that having this "fiscal sponsorship" setup does not change anything about whether a foreign org can receive tax-deductible donations from France.

It seems that no amount of indirection can make donations to Lightcone tax-deductible in France, short of Lightcone actually having operations in France.

kave
Just a message to confirm: Zac's leg of the trade has been executed for $810. Thanks Lucie for those $810!

For now, it's still unclear whether donations will be tax-deductible in France. I'll contact Effektiv Spenden to check.

Zac Hatfield-Dodds
If they're not, let me know by December 27th and I'll be happy to do the swap after all!

"Making nodes in one's head" → probably meant knots?

TL;DR: This post gave me two extremely useful handles to talk about a kind of internal struggle I've been grappling with for as long as I've been in the EA community.

This post seemed obviously true when I read it and I started reusing the concept in conversations, but it did not lead to a lot of internal changes. However, a few months later, having completely forgotten this post, I started practicing self therapy using Internal Family Systems, and then I uncovered a large conflict which after multiple sessions seemed to map really well to the two archetype... (read more)

(I only discovered this post in 2024, so I'm less sure it will stand the test of time for me)

This post is up there with The God of Humanity, and the God of the Robot Utilitarians as one of the posts that contributed the most to making me confront the conflict between wanting to live a good life and wanting to make the future go well.

I read this post while struggling, half burnt out, at a policy job, having lost touch with the fire that drove me to AI safety in the first place, and this imaginary dialogue brought back the fire I had initially found while reading HP... (read more)

Update: It seems definitely not possible to get a tax deduction in France for donations to an American organisation.

This post from Don Efficace, the organisation which was trying to set up EA regranting in France, explains the constraints for the French tax deduction: https://forum.effectivealtruism.org/posts/jWhFmavJ9cE2zE585/procedure-to-allow-donations-to-european-organizations-to-be

I reached out to Lucie and we agreed to swap donations: she'd give 1000€ to AMF, and I'd give an additional[1] $810 to Lightcone (which I would otherwise send to GiveWell). This would split the difference in our tax deductions, and lead to more total funding for each of the organizations we want to support :-)

We ended up happily cancelling this plan because donations to Lightcone will be deductible in France after all, but I'm glad that we worked through all the details and would have done it. Update: because we're doing it after all!


  1. I think it's plau

... (read more)

My bad, I mistook Mieux Donner for an older organisation that was trying to set this up.

I checked online, and it does not seem possible to get the deduction for non-profits outside the EU, even through a proxy, unless their activities are related to France or are humanitarian.

Source: https://www.centre-francais-fondations.org/dons-transnationaux/

habryka
Alas, thank you for looking into it.

Completed! It was really fun. Thanks for the question that let me give appreciation to another LWer :)

Screwtape
You're welcome! Last year I had a version of that question where (mimicking a question the LW team asked) I said I'd keep it private. Reading the answers felt nice, and I realized an anonymous but public version of that could be really nice for a lot of people.

I'd love to donate ~5K€ to Lightcone next year, but as long as it's not tax-deductible in France I'll stick to French AI safety orgs, as the French non-profit donation tax break is stupidly good: it can basically triple the donation amount and reduce income tax to 0.
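As a rough illustration of the "triple" claim, here is a minimal sketch assuming the standard 66% French réduction d'impôt on donations to eligible non-profits and ignoring the income cap (the rate and cap are my assumptions about the general rule, not figures from this thread):

```python
# Hypothetical back-of-the-envelope for the French donation tax break.
# Assumption (mine, not from the comment): a 66% tax reduction on the donated
# amount, with the 20%-of-taxable-income cap ignored.
deduction_rate = 0.66

donation = 1000                              # euros sent to the organisation
net_cost = donation * (1 - deduction_rate)   # out-of-pocket cost after the reduction: 340
leverage = donation / net_cost               # ~2.9, i.e. roughly "triple"

print(f"A {donation}€ donation costs {net_cost:.0f}€ net ({leverage:.1f}x leverage).")
```

Under that assumed rate, the same net cost lets a French donor send roughly three times as much to an eligible org as to a non-eligible one.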

I know that Mieux Donner, a new French effective giving org, is acting as a French tax-deductible front for a number of EA organisations. I'll contact them to check whether they could forward a donation to Lightcone, and give an update under this comment.

Lucie Philippon
Update: It seems definitely not possible to get a tax deduction in France for donations to an American organisation. This post from Don Efficace, the organisation which was trying to set up EA regranting in France, explains the constraints for the French tax deduction: https://forum.effectivealtruism.org/posts/jWhFmavJ9cE2zE585/procedure-to-allow-donations-to-european-organizations-to-be
Lucie Philippon
My bad, I mistook Mieux Donner for an older organisation that was trying to set this up. I checked online, and it does not seem possible to get the deduction for non-profits outside the EU, even through a proxy, unless their activities are related to France or are humanitarian. Source: https://www.centre-francais-fondations.org/dons-transnationaux/
habryka
That would be great! Let’s hope they say yes :)

I find that by focusing on the dancer's legs, I manage to see it oscillating: a half-turn clockwise then a half-turn counterclockwise, with the feet towards the front. However, this always breaks when I start looking at the arms. Interesting!

I'm currently doing the Rethink Wellbeing IFS Course, and it's given me so much understanding of myself so quickly with no diminishing returns in sight yet, that it felt like the perfect time to apply More Dakka.

Therefore, I've used this list to generate ideas of how to apply More Dakka to my internal exploration, and found 30 strategies that sound super helpful :) 

Makes sense! Someone yesterday mistakenly read it as the date of the event, so this confusion seems to happen.

Ben Pace
I just made a PR that removes the published date from the top and shows "Posted at: <date>" at the bottom. Will probably go live today.

When I look at the date, it says 10th November 2023, but underneath it says 27th September. Seems like a bug.

habryka
Hmm, that is bad UX on our side. The 10th of November 2023 thing refers to when the post was posted. IMO for events it should somehow be deemphasized (though it is important information I want to preserve)

I guess the word "mnestic" was originally introduced in the popular SCP story There Is No Antimemetics Division.

I expect it could be mildly valuable to index the previously existing calendars and give the best current alternative. I don't think it will bring much though.

Where is the event? There is no location information.

Nate Sternberg
Sorry!  Fixed.

This list is aimed at people visiting the Bay Area and looking for how to get in contact with the local community. Currently, the Lighthaven website does not list events happening there, so I don't think it's relevant for someone who is not searching for a venue.

Possibly a larger index of rationalist resources in the Bay would be useful, including potential venues.

Czynski
I think at normal times (when it's not filled with MATS or a con) it's possible to rent coworking space at Lighthaven? I haven't actually tried myself.

I expect that basic econ models and their consequences for the motivations of investors are already mostly known in the AI safety community, even if only through vague statements like "VCs are more risk tolerant than pension funds".

My main point in this post is that it might be the case that AI labs have successfully removed themselves from the influence of investors, so that it actually matters very little what the investors of AI labs want or do. I think that determining whether this is the case is important, as in that case our intuitions about how companies generally work would not apply to AI labs.

The link does not work.

I don't think a written disclaimer would amount to much in a court case without corresponding provisions in the corporate structure.

utilistrutil
Better link: https://www.bloomberg.com/opinion/articles/2024-07-10/jefferies-funded-some-fake-water 

Following this post, I made 4 forecasts on the output and impact of my MATS project, which led me to realize some outcomes I expected were less likely than I felt, absent active effort on my part to make them happen.

I don't have any more information on this. DM me if you want me to check whether I can find more info.

The founders of Hugging Face are French, yes, but I'm not sure how invested they are in French AI policy. I mostly did not hear about them taking any specific actions or having any specific people with influence there.

I'm glad this post came out and made me try Claude. I now find it mostly better than ChatGPT, and with the introduction of projects, all the features I need are there.

In the new UI, the estimated reading time is not visible anymore. Is this intended?

It was often useful for me. How can I tell my friends "I'll arrive in X minutes, just after reading this post" without knowing the reading time!

I consumed edible cannabis for the first time a few months ago, and it felt very similar to the experience you're describing. I felt regularly surprised at where I was, and had lots of trouble remembering more than the last 30 seconds of the conversation. 

The most troubling experience was listening to someone telling me something, me replying, and while saying the reply, forgetting where I was, what I was replying to and what I already said. The weirdest part is that at this point I would finish the reply in a sort of disconnected state, not knowing where the words were coming from, and at the end I would have a feeling of "I said what I wanted to say", even though I could not remember a word of it.

Bolverk
I get a similar effect if sleep deprived long enough. A sense of operating on autopilot and doing things before thinking of them.
Gunflint
I know that feeling. Did I ask my wife if she set the alarm or did I just think it? Probably better to assume I did ask otherwise if I already did she’ll be after me to quit smoking that stuff.

The main part of the issue was actually that I was not aware I had internal conflicts. I just mysteriously felt fewer emotions and less motivation. That's the main thing all the articles I read on sustainable productivity did not convey to me: how to recognize it as it happens, without my internal monologue ever saying "I don't want to work on this" or something.

What do you think antidepressants would be useful for? I don't expect to match any clinical criteria for depression.

mesaoptimizer
Yes, I believe that one can learn to entirely stop even considering certain potential actions as actions available to us. I don't really have a systematic solution for this right now aside from some form of Noticing practice (I believe a more refined version of this practice is called Naturalism but I don't have much experience with this form of practice).
mesaoptimizer
In my experience I've gone months through a depressive episode while remaining externally functional and convincing myself (and the people around me) that I'm not going through a depressive episode. Another thing I've noticed is that with medication (whether anxiolytics, antidepressants or ADHD medication), I regularly underestimate the level at which I was 'blocked' by some mental issue that, after taking the medication, would not exist, and I would only realize it previously existed due to the (positive) changes in my behavior and cognition. Essentially, I'm positing that you may be in a similar situation.

There was this voice inside my head telling me that since I have Something to Protect, relaxing is never OK beyond the strict minimum, the goal is paramount, and I should just work as hard as I can all the time.

This led to me breaking down and being incapable of working at my AI governance job for a week, as I had just piled up too much stress.

And then, I decided to follow what motivated me in the moment, instead of coercing myself into working on what I thought was most important, and lo and behold, my total output increased while my time spent working decreased.

I'... (read more)

mesaoptimizer
Have you considered antidepressants? I recommend trying them out to see if they help. In my experience, antidepressants can have non-trivial positive effects that can be hard to put into words, except you can notice the shift in how you think and behave and relate to things, and this shift is one that you might find beneficial. I also think that slowing down and taking care of yourself can be good -- it can help build a generalized skill of noticing the things you didn't notice before that led to the breaking point you describe. Here's an anecdote that might be interesting to you: There's a core mental shift I made over the past few months that I haven't tried to elicit and describe to others until now, but in essence it involves a sort of understanding that the sort of self-sacrifice that is usually involved in working as hard as possible leads to globally unwanted outcomes, not just locally unwanted outcomes. (Of course, we can talk about hypothetical isolated thought experiments and my feelings might change, but I'm talking about a holistic relating to the world here.) Here's one argument for this, although I don't think this captures the entire source of my feelings about this: When parts of someone are in conflict, and they regularly reject a part of them that wants something (creature comforts) to privilege the desires of another part of them that wants another thing (work more), I expect that their effectiveness in navigating and affecting reality is lowered in comparison to one where they take the time to integrate the desires and beliefs of the parts of them that are in conflict. In extreme circumstances, it makes sense for someone to 'override' other parts (which is how I model the fight-flight-fawn-freeze response, for example), but this seems unsustainable and potentially detrimental when it comes to navigating a reality where sense-making is extremely important.
trevor
Upvoted! STEM people can look at it like an engineering problem, Econ people can look at it like risk management (risk of burnout). Humanities people can think about it in terms of human genetic/trait diversity in order to find the experience that best suits the unique individual (because humanities people usually benefit the most from each marginal hour spent understanding this lens). Succeeding at maximizing output takes some fiddling. The "of course I did it because of course I'm just that awesome, just do it" thing is a pure flex/social status grab, and it poisons random people nearby.

On the Spotify release, there is a typo in "First they came for the epsistemology".

habryka
Yeah, should be fixed within the next few days. 
Answer by Lucie Philippon

Over the last two years, I discovered LessWrong, learned about x-risks, joined the rationalist community, joined EA, started a rationalist/EA group house, and finally left my comfy high-earning crypto job last September to start working on AI safety. During this time, I definitely felt multiple switches into taking on different kinds of responsibilities.

The first responsibility I learned, by reading HPMOR and The Sequences, was the sense that more was possible, that I could achieve greatness, become as cool as I ever wanted, but that it needed actual wo... (read more)

I was allergic to acarids (dust mites) when I was a child, and this caused a severe asthma attack when I was around 10. I live in France, and I got prescribed SLIT by the first allergy specialist my mother found, so I guess it's quite a common treatment there. I took it for more than 5 years, and now, 8 years later, I don't have any allergy symptoms.

I filled in the survey! It was a fun way to relax this morning

Yesterday, I was searching for posts by alignment researchers describing how they got into the field. I was searching specifically for personal stories rather than guides on how other people can get into the field.

I was trying to perform Intuition flooding, by reading lots of accounts, and getting intuitions on which techniques work to enter the field.

I only managed to find three which somewhat fit my target:

... (read more)
Jack O'Brien
I think this is a good thing to do! I recommend looking up things like "reflections on my LTFF upskilling grant" for similar pieces from lesser known researchers / aspiring researchers.

blog.jaibot.com does not seem to exist anymore.

I don't have the intuition that reactions will replace some comments which would have been written without this feature. What makes you think this will happen?

If reactions were tied to posting a comment, such that reactions could not decrease the number of comments, would this make you more likely to support this feature?

Incidentally, thinking about which reaction to put on this comment instead of just up- or downvoting made me realize I did not completely understand what you meant, and motivated me to write a comment instead.

I think in this situation, you could use the momentum to implement one hack which increases the probability of implementing all of them in the future. For example, buying a whiteboard, writing down all the life-hack ideas you got from the minicamp, and putting it in a very visible place.

We're in agreement. I'm not sure what my expectation is for the length of this phase or the final productivity boost, but I was exploring what we would need to do now to prepare for the kind of world where there is a short period of time when productivity skyrockets. If we end up in such a world, I would prefer people working on AI alignment to be ready to exploit the productivity gains fully.

The question I was exploring was not how to find the tools that do make their users more productive, as I expect good curation to appear in time with the tools, but whether there are resources which would be necessary to use those tools yet difficult to acquire in a short time once the tools are released.

The post was not optimized for SEO, but it definitely has a ChatGPT style I dislike. It's one of my first posts, so I'm still exploring how to write good quality posts. Thank you for the feedback!

At the individual level, I expect agentic AI to enable even more powerful tools, like ACT acting as a semi-autonomous digital assistant, or AutoGPT acting as a lower level executor, taking in your goals and doing most of the work.

Once we have powerful agentic AGI, of the kind that can run continuously and disempower humanity, I expect that at this point we'll be leaving the "world as normal but faster" phase where tools are useful, and then what happens next depends on our alignment plan I guess.

Daniel Kokotajlo
OK, I think we are in agreement then. I think we'll be leaving the "world as normal but faster" phase sooner than you might expect -- for example, by the time my own productivity gets a 3x boost even.

I think I focused too much on the "competitive" part, but my main point was that only certain factors would maintain a difference between individuals' productivity, whether they are zero-sum or not. If future AI assistants require large personal datasets to perform well, only the people with preexisting datasets will perform well for a while, even though anyone could start their own dataset at that point.

Conjecture is "a team of researchers dedicated to applied, scalable AI alignment research." according to their website https://www.conjecture.dev/

They are publishing regularly on the Alignment Forum and LessWrong: https://www.lesswrong.com/tag/conjecture-org

I also searched their website, and it does not look like Bonsai is publicly accessible. This must be some internal tool they developed?

This post points at an interesting fact: some people, communities, or organizations already called themselves "rationalists" before the current rationalist movement. It brings forth the idea that the rationalist movement may be anchored in a longer history than it might first seem from reading the LessWrong/Overcoming Bias/Eliezer history.

However, this post reads more like a Wikipedia article, or an historical overview. It does not read like it has a goal. Is this post making some sort of argument that the current rationalist community is descended from those... (read more)

TIL that the path a new LW user is expected to follow, according to https://www.lesswrong.com/posts/rEHLk9nC5TtrNoAKT/lw-2-0-strategic-overview, is to become comfortable with commenting regularly in 3-6 months, and comfortable with posting regularly in 6-9 months. I discovered the existence of shortforms. I (re)discovered the expectation that your posts should be treated as a personal blog, Medium style?

As I'm typing this I'm still unsure whether I'm destroying the website with my bad shortform, even though the placeholder explicitly said... (\*r... (read more)
