I did not know about this either. Do you know whether the EAs in the EU Commission know about it?
Thanks for the feedback! It made more sense as an event title. I'll edit it.
Earlier discussion on LW about zinc lozenges' effectiveness mentioned that other flavorings which make them taste nice actually prevent the zinc effect.
From this comment by philh (quite a chain of quotes haha):
...According to a podcast that seemed like the host knew what he was talking about, you also need the lozenges to not contain any additional ingredients that might make them taste nice, like vitamin C. (If it tastes nice, the zinc isn’t binding in the right place. Bad taste doesn’t mean it’s working, but good taste means it’s not.) As of a few years ago, that
It seems that @Czynski changed the structure of the website and that entries are now stored in this folder.
Maybe you could DM him?
See the thread here: https://www.lesswrong.com/posts/5n2ZQcbc7r4R8mvqc/the-lightcone-is-nothing-without-its-people?commentId=Tmra55pcMKahHyBcn
Does not look possible to have tax deductibility in France, no matter the indirection
Just got confirmation from Effektiv Spenden (with Mieux Donner's input) that having this "fiscal sponsorship" setup does not change whether a foreign org can receive tax-deductible donations from France.
Seems that no amount of indirection can enable Lightcone to be tax-deductible in France, short of actually having operations in France.
For now, it's still unclear whether donations will be tax-deductible in France. I'll contact Effektiv Spenden to check.
"Making nodes in one's head" → probably meant knots?
TL;DR: This post gave me two extremely useful handles to talk about a kind of internal struggle I've been grappling with for as long as I've been in the EA community.
This post seemed obviously true when I read it and I started reusing the concept in conversations, but it did not lead to a lot of internal changes. However, a few months later, having completely forgotten this post, I started practicing self therapy using Internal Family Systems, and then I uncovered a large conflict which after multiple sessions seemed to map really well to the two archetype...
(I only discovered this post in 2024, so I'm less sure it will stand the test of time for me)
This post is up there with The God of Humanity, and the God of the Robot Utilitarians among the posts that contributed the most to making me confront the conflict between wanting to live a good life and wanting to make the future go well.
I read this post while struggling half burnt out on a policy job, having lost touch with the fire that drove me to AI safety in the first place, and this imaginary dialogue brought back this fire I had initially found while reading HP...
Update: It seems definitely not possible to get a tax deduction in France for an American organisation.
This post from Don Efficace, the organisation that was trying to set up EA regranting in France, explains the constraints on French tax deductibility: https://forum.effectivealtruism.org/posts/jWhFmavJ9cE2zE585/procedure-to-allow-donations-to-european-organizations-to-be
I reached out to Lucie and we agreed to swap donations: she'd give 1000€ to AMF, and I'd give an additional[1] $810 to Lightcone (which I would otherwise send to GiveWell). This would split the difference in our tax deductions, and lead to more total funding for each of the organizations we want to support :-)
We ended up happily cancelling this plan because donations to Lightcone will be deductible in France after all, but I'm glad that we worked through all the details and would have done it. Update: we're doing it after all!
I think it's plau
My bad, I mistook Mieux Donner for an older organisation that was trying to set this up.
I checked online, and it does not seem possible to get the deduction for non-profits outside the EU, even through a proxy, unless their activities are related to France or are humanitarian.
Source: https://www.centre-francais-fondations.org/dons-transnationaux/
Completed! It was really fun. Thanks for the question to give appreciation to another LWer :)
I'd love to donate ~5K€ to Lightcone next year, but as long as it's not tax-deductible in France I'll stick to French AI safety orgs, as the French non-profit donation tax break is stupidly good: it can basically triple the donation amount and reduce income tax to 0.
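(A rough sketch of the "triple" arithmetic, assuming the standard ~66% French income-tax reduction for donations to eligible non-profits; the exact rate and caps depend on the organisation and the donor's situation, so treat the 0.66 as an illustrative assumption:)

$$\text{donation affordable on an out-of-pocket budget } B \;=\; \frac{B}{1 - 0.66} \;\approx\; 2.9\,B$$

So a fixed net cost supports roughly three times the gross donation, as long as the reduction doesn't push the income tax owed below zero.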
I know that Mieux Donner, a new French effective giving org, is acting as a French tax-deductible front for a number of EA organisations. I'll contact them to check whether they could forward a donation to Lightcone and will give an update under this comment.
I found that by focusing on the dancer's legs, I managed to see it oscillating: a half-turn clockwise, then a half-turn counterclockwise, with the feet towards the front. However, this always breaks when I start looking at the arms. Interesting.
I'm currently doing the Rethink Wellbeing IFS Course, and it's given me so much understanding of myself so quickly with no diminishing returns in sight yet, that it felt like the perfect time to apply More Dakka.
Therefore, I've used this list to generate ideas of how to apply More Dakka to my internal exploration, and found 30 strategies that sound super helpful :)
Makes sense! Someone yesterday mistakenly read it as the date of the event, so this confusion seems to happen.
When I look at the date, it says 10th November 2023, but underneath it says 27th September. Seems like a bug.
I guess the word Mnestic was originally introduced in the popular SCP story There is no Antimemetics Division.
I expect it could be mildly valuable to index the previously existing calendars and give the best current alternative. I don't think it will bring much though.
Where is the event? There is no location information.
This list is aimed at people visiting the Bay Area and looking for ways to get in contact with the local community. Currently, the Lighthaven website does not list events happening there, so I don't think it's relevant for someone who is not searching for a venue.
Possibly a larger index of rationalist resources in the Bay would be useful, including potential venues.
I expect that basic econ models and their consequences on the motivations of investors are already mostly known in the AI safety community, even if only through vague statements like "VCs are more risk tolerant than pension funds".
My main point in this post is that AI labs may have successfully removed themselves from the influence of investors, so that what the investors of AI labs want or do actually matters very little. I think determining whether this is the case is important, because if so, our general intuitions about how companies work would not apply to AI labs.
The link does not work.
I don't think a written disclaimer would amount to much in a court case without corresponding provisions in the corporate structure.
Following this post, I made 4 forecasts on the output and impact of my MATS project, which led me to realize some outcomes I expected were less likely than I felt, absent active effort on my part to make them happen.
I don't have any more information on this. DM me if you want me to check whether I can find more info.
The founders of Hugging Face are French, yes, but I'm not sure how invested they are in French AI policy. I mostly haven't heard of them taking any specific actions or having any specific people with influence there.
I'm glad this post came out and made me try Claude. I now find it mostly better than ChatGPT, and with the introduction of projects, all the features I need are there.
In the new UI, the estimated reading time is not visible anymore. Is this intended?
It was often useful for me. How can I tell my friends "I'll arrive in X minutes, just after reading this post" without knowing the reading time!
I consumed edible cannabis for the first time a few months ago, and it felt very similar to the experience you're describing. I felt regularly surprised at where I was, and had lots of trouble remembering more than the last 30 seconds of the conversation.
The most troubling experience was listening to someone telling me something, me replying, and while saying the reply, forgetting where I was, what I was replying to and what I already said. The weirdest part is that at this point I would finish the reply in a sort of disconnected state, not knowing where the words were coming from, and at the end I would have a feeling of "I said what I wanted to say", even though I could not remember a word of it.
The main part of the issue was actually that I was not aware I had internal conflicts. I just mysteriously felt less emotion and motivation. That's the main thing all the articles I read on sustainable productivity failed to convey: how to recognize it as it happens, without my internal monologue ever saying "I don't want to work on this" or something.
What do you think antidepressants would be useful for? I don't expect to be matching any clinical criteria for depression.
There was this voice inside my head telling me that since I have Something to Protect, relaxing is never OK beyond the strict minimum, the goal is paramount, and I should just work as hard as I can all the time.
This led to me breaking down and being unable to work at my AI governance job for a week, as I had just piled up too much stress.
And then I decided to follow what motivated me in the moment, instead of coercing myself into working on what I thought was most important, and lo and behold, my total output increased while my time spent working decreased.
I'...
On the Spotify release, there is a typo in "First they came for the epsistemology".
Over the last two years, I discovered LessWrong, learned about x-risks, joined the rationalist community, joined EA, started a rationalist/EA group house, and finally left my comfy, high-earning crypto job last September to start working on AI safety. During this time, I definitely felt multiple switches as I took on different kinds of responsibilities.
The first responsibility I learned, by reading HPMOR and The Sequences, was the sense that more was possible, that I could achieve greatness, become as cool as I ever wanted, but that it needed actual wo...
I was allergic to dust mites when I was a child, and this caused a severe asthma attack when I was around 10. I live in France, and I got prescribed SLIT by the first allergy specialist my mother found, so I guess it's quite a common treatment there. I took it for more than 5 years, and now, 8 years later, I don't have any allergy symptoms.
I filled in the survey! It was a fun way to relax this morning.
Thank you for the pointer! I found the article you mentioned, and then found the Postmortem & Retrospective tag, which led me to three additional posts:
Yesterday, I was searching for posts by alignment researchers describing how they got into the field. I was searching specifically for personal stories rather than guides on how other people can get into the field.
I was trying to perform Intuition flooding, by reading lots of accounts, and getting intuitions on which techniques work to enter the field.
I only managed to find three that somewhat fit my target:
blog.jaibot.com does not seem to exist anymore.
I don't have the intuition that reactions will replace some comments which would have been written without this feature. What makes you think this will happen?
If reactions were tied to posting a comment, such that reactions could not decrease the number of comments, would this make you more likely to support this feature?
Incidentally, thinking about which reaction to put to this comment instead of just up or downvoting made me realize I did not understand completely what you meant, and motivated me to write a comment instead.
I think in this situation you could use the momentum to implement one hack which increases the probability of implementing all of them in the future. For example: buying a whiteboard, writing down all the life-hack ideas you got from the minicamp, and putting it somewhere very visible.
We're in agreement. I'm not sure what my expectation is for the length of this phase or the final productivity boost, but I was exploring what we would need to do now to prepare for the kind of world where there is a short period of time when productivity skyrockets. If we end up in such a world, I would prefer people working on AI alignment to be ready to fully exploit the productivity gains.
The question I was exploring was not how to find the tools that do make their users more productive, as I expect good curation to appear in time with the tools, but whether there were resources which would be necessary to use those tools, but difficult to acquire in a short time when the tools are released.
The post was not optimized for SEO, but it definitely has a ChatGPT style I dislike. It's one of my first posts, so I'm still exploring how to write good-quality posts. Thank you for the feedback!
At the individual level, I expect agentic AI to enable even more powerful tools, like ACT acting as a semi-autonomous digital assistant, or AutoGPT acting as a lower-level executor, taking in your goals and doing most of the work.
Once we have powerful agentic AGI, of the kind that can run continuously and disempower humanity, I expect we'll be leaving the "world as normal but faster" phase where tools are useful, and what happens next depends on our alignment plan, I guess.
I think I focused too much on the "competitive" part, but my main point was that only certain factors would maintain a difference between individuals' productivity, whether they are zero-sum or not. If future AI assistants require large personal datasets to perform well, only the people with preexisting datasets will perform well for a while, even though anyone could start their own dataset at that point.
Conjecture is "a team of researchers dedicated to applied, scalable AI alignment research." according to their website https://www.conjecture.dev/
They are publishing regularly on the alignment forum and LessWrong https://www.lesswrong.com/tag/conjecture-org
I also searched their website, and it does not look like Bonsai is publicly accessible. This must be some internal tool they developed?
This post points at an interesting fact: some people, communities, and organizations already called themselves "rationalists" before the current rationalist movement. It brings forth the idea that the rationalist movement may be anchored in a longer history than it might first seem from reading the LessWrong/Overcoming Bias/Eliezer history.
However, this post reads more like a Wikipedia article, or an historical overview. It does not read like it has a goal. Is this post making some sort of argument that the current rationalist community is descended from those...
TIL that the path a new LW user is expected to follow, according to https://www.lesswrong.com/posts/rEHLk9nC5TtrNoAKT/lw-2-0-strategic-overview, is to become comfortable with commenting regularly in 3-6 months, and comfortable with posting regularly in 6-9 months. I discovered the existence of shortforms. I (re)discovered the expectation that your posts should be treated as a personal blog, Medium-style?
As I'm typing this I'm still unsure whether I'm destroying the website with my bad shortform, even though the placeholder explicitly said...
Agreed that the current situation is weird and confusing.
The AI Alignment Forum is marketed as the actual forum for AI alignment discussion and research sharing. However, it seems that the majority of discussion shifted to LessWrong itself, in part due to most people not being allowed to post on the Alignment Forum, and most AI Safety related content not being actual AI Alignment research.
I basically agree with Reviewing LessWrong: Screwtape's Basic Answer. It would be much better if AI Safety related content had its own domain name and home page, with som...
I think it would be extremely bad for most LW AI Alignment content if it was no longer colocated with the rest of LessWrong. Making an intellectual scene is extremely hard. The default outcome would be that it would become a bunch of fake ML research that has nothing to do with the problem. "AI Alignment" as a field does not actually have a shared methodological foundation that causes it to make sense to all be colocated in one space. LessWrong does have a shared methodology, and so it makes sense to have a forum of that kind.
I think it could make se...