In a parallel universe with a saner civilization, there must be tons of philosophy professors working with tons of AI researchers to try to improve AI's philosophical reasoning.
Sanskrit scholars worked for generations to make Sanskrit better for philosophy.
Thank you. We just had some writers join, who're, among other things, going to make an up-to-date About Us section. Some out-of-date stuff is available on https://aiplans.substack.com
Something that we use internally is: https://docs.google.com/document/d/1wcVlWRTKJqiXOvKNl6PMHCBF3pQItCCcnYwlWvGgFpc/edit?usp=sharing
We're primarily focused on a site rebuild atm, which has a lot of new and improved features users have been asking for. Preview (lots of form-factor stuff is broken atm) at: https://ai-plans-site.pages.dev/
Ok, so are these not clickbait then?
"Stop This Train, Win a Lamborghini"
My Clients, The Liars
And All The Shoggoths Merely Players
Acting Wholesomely
These are the most obvious examples. By 'clickbait', here I mean a title that's more for drawing in readers than for accurately communicating what the post is about. That doesn't mean it can't be accurate too - after all, MrBeast rarely lies in his video titles - but it means that instead of choosing the most accurate title, they chose the most eye-catching and baiting title out of the pool of accurate/semi-accurate titles.
Week 3: How hard is AI alignment?
Seems like something important to be aware of, even if they might disagree.
Ah, sorry, here's the link! https://docs.google.com/spreadsheets/d/1uXzWavy1mS0X-uQ21UPWHlAHjXFJoWWlN62EyKAoUmA/edit?usp=sharing
Thank you for pointing that out, also added it to the post!
If asked about recommendation algorithms, I think it might be much higher - given a basic understanding of what they are, their addictiveness, etc.