If it’s worth saying, but not worth its own post, here’s a place to put it. (You can also make a shortform post.)
And, if you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here.
I think debating is the best way to learn. I’ve been cynical, skeptical, and a critical thinker my whole life, so I question most things. Debating works better for me as a learning tool because I can’t simply be fed information the way it’s done in public schools; I have to try to poke holes in an idea and be convinced that it still holds water.
As for what I asked Claude, he actually recommended LW to me on three separate occasions. I collaborate with him to refine my ideas and plans, and he suggested I look for human collaborators here, at Astral Codex Ten, and in effective altruism groups. The first time, he described LW as a “rationalist” group, which I misread because of my philosophy background (I was thinking, “you mean like fans of Descartes and Kant?”) and wasn’t very impressed, since I consider myself more of an epistemic empiricist than a rationalist. The second time, since he had mentioned it more than once, I actually looked into it and realized the word “rationalist” was being used differently than I thought. The third time, I decided to pull the trigger, started reading the Sequences, and then made my intro post. So far I haven’t read anything terribly new (I had already arrived at that style of methodological thinking through authors such as Daniel Kahneman, Karl Popper, and Nassim Taleb, or I would be enthralled), but I am really glad there is an internet community of people who think that way.
That said, I know AI safety is the hot topic here right now, and while I am tech savvy, I am far from an AI expert. I find AI incredibly useful even in its current form (mostly LLMs). They are quite imperfect, but they do a ton of sloppy thinking in a very short time that I can quickly clean up and make useful for my purposes, so long as I prompt them correctly.
However, I think I have a lot to contribute to AI safety as well, because much of the AI savior-versus-disaster question hinges on social science problems. In my opinion, the social sciences are very underdeveloped because few, if any, people have looked at the biggest problems in ways that could realistically be solved, or have been capable of imagining and designing social systems that would functionally modulate behavior within social groups, be robust against being gamed by tyrants and antisocial personalities, have a non-catastrophic risk profile, and have any realistic chance of being implemented within the boundaries of current social systems. I believe I am up to the challenge (at least in the U.S.), but my first task is to convince a group of people with the right skills and mindsets to collaborate and help me pull it off. It will also take a lot of money for a startup, which needs to be raised via crowdfunding so there aren’t any domineering interests. When I asked Claude where I might even begin to look for such help, he suggested this site as the top choice all three times.
Whether it works out that way or not, I am glad I found LW. Otherwise, I only have my family, normies, and internet trolls to discuss serious topics with, and that gets exhausting.