If it’s worth saying, but not worth its own post, here's a place to put it. (You can also make a shortform post.)
And, if you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here.
Hello all, and thank you to everyone who helps provide this space. I am glad to have discovered LW. My name is Benjamin, and I am a philosopher and self-guided learner. I found LW a short while ago and am reading through the Sequences. After many years of attempting to have productive, truth-seeking conversations in social media groups (which is akin to repeatedly bludgeoning one’s head against a wall), I gave up on them. Claude recently recommended LW to me, and so far it seems like a great recommendation.
One of the things I find discouraging about modern times is how much outright deception is tolerated. Whether in politics, business, scientific institutions, interpersonal relationships, or even lying to oneself, deception seems to be king in our modern environment. I am a systemic thinker, and this looks to me like a terrible system. Honesty is the better option for everyone collectively, but deception is more rewarding for each individual actor, so we have ended up in a prisoner’s-dilemma-like situation in which most actors defect.
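To make that game-theoretic framing concrete, here is a minimal toy sketch (the payoff numbers are purely illustrative assumptions, not drawn from any real data) of how deception can dominate for each individual actor even though mutual honesty is better for everyone:

```python
# Toy prisoner's-dilemma payoffs for "honesty vs. deception".
# The numbers are illustrative only; what matters is the ordering:
# deceiving beats being honest for each actor individually,
# yet mutual honesty beats mutual deception collectively.

PAYOFFS = {
    # (my choice, other's choice): (my payoff, other's payoff)
    ("honest", "honest"):   (3, 3),  # mutual cooperation: best collective outcome
    ("honest", "deceive"):  (0, 5),  # I am exploited
    ("deceive", "honest"):  (5, 0),  # I exploit
    ("deceive", "deceive"): (1, 1),  # mutual defection: worst collective outcome
}

def best_response(other_choice: str) -> str:
    """Return the choice that maximizes my own payoff given the other's choice."""
    return max(("honest", "deceive"),
               key=lambda mine: PAYOFFS[(mine, other_choice)][0])

# Deception is the dominant strategy no matter what the other actor does...
assert best_response("honest") == "deceive"
assert best_response("deceive") == "deceive"

# ...yet the total payoff from mutual honesty exceeds that of mutual deception.
assert sum(PAYOFFS[("honest", "honest")]) > sum(PAYOFFS[("deceive", "deceive")])
```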
I am interested in answering two questions related to this situation:
I like the chances of solving this problem with AI, but I think governments and corporations will try to centralize control of AI and prevent it, because both institutions largely subsist on deception. I believe we are standing on a razor's edge between genuine democracy and freedom on one side and a centralized totalitarian oligarchy on the other, and which way we fall depends largely on how control over AI shakes out. Philosophically, I am a decentralist. I strongly believe in true democracy, as opposed to "democracy" used as an applause light, as the Sequences aptly describe. I am writing a book on how to achieve true democracy in the United States, because I believe the future of the world hinges on whether this can be accomplished.
I am also very open to counter-arguments. I have no desire to cling to false beliefs, and I am happy to lose a debate because it means I learned something and became smarter in the process. In this sense, the loser of a debate is the real winner: they learned something, while the winner only spent time and energy correcting someone else's false belief. Still, winning has its own benefit in the form of a dopamine rush, so it is a positive-sum game. I wish everyone had this attitude. Just know that if you can show I am wrong about something, I won't retreat into cognitive dissonance; I will simply update my opinion.
I have a large number of ideas on how to effect positive change, which I will be posting about, and critical or positive feedback is welcome. Thanks to everyone who contributes to this space, and I hope to have many cooperative conversations here in the future.
I think it’s not only nice but a necessary step for reducing information asymmetry, which is one of the greatest barriers to effective democratic governance. Designing jargon terms to benefit more challenged learners would do far more good than designing them to please adept learners: it wouldn't harm adept learners in any significant way (especially since it's optional), but it would significantly help the more challenged learners. Many of my ideas aim to address information asymmetry by improving learning and increasing transparency.