I'm an admin of LessWrong. Here are a few things about me.
Randomly: If you ever want to talk to me about anything you like, I am happy to be paid $1k for an hour of doing that.
Most of the people I heard from at the afterparty had not been to Solstice. Several didn't even know what it was.
Interesting. Could be good to have spaces that are for Solstice-attendees-only, if this were to continue to be the case.
Nothing set in stone yet, but we're currently considering running it again within the next 6 months.
Not gonna link because I don't want people filling it out, but 80% of the determination was from the question "Please link to 1-5 pieces of writing of yours (public or private) that you think are your best pieces of writing. Something you're proud of, or that shows what you're capable of. For each, please give a brief description of what it is."
I think your read of Habryka's reply is mistaking a for-all quantifier for a there-exists quantifier. Insofar as you're saying "never use this information to harm the interests of the third party", Habryka is saying "no; here is an instance where I would want to share it that seems reasonable to me, even though it does involve something the third party might find harmful". This is distinct from "no; if I ever find a situation where I can use this info to harm the third party, I will use the info to do that".
Nominating this for the 2024 Review. +9. This post has influenced me possibly the most of any LessWrong post of 2024, and I think about it many times per month. It basically seems like there was a whole part of human psychology that I was not modeling before this: when people talked about what they believed in, I could only parse it as them failing to have beliefs as maps of the territory (rather than as things-to-invest-in). It helped me notice that there were things in the world that I believed-in in this sense but had not been allowing myself to notice, and it has been a major boost to my motivation to do things that I care about and find meaningful.
Gotcha. To be clear, I didn't read you as requesting a change; this was written primarily for "all the readers" to have more contact with reality, rather than to challenge anything you wrote.
I don't know what you mean by "the prompt, in a significant sense, is the post". When I ask ChatGPT "What are some historical examples of mediation ending major conflicts?", that really has very different information content from the detailed list of 10 examples it gives me back.
I'm just gonna copy-paste my comment from yesterday's discussion, so that people have concrete examples of what we're dealing with here.
We are drowning in this stuff. If you want, you can go through the dozen posts a day we get that are obviously written by AI, and propose that we (instead of spending 5-15 mins a day skimming and quickly rejecting them) spend as many hours as it takes to read and evaluate the content and the ideas to figure out which are bogus/slop/crackpot and which have any merit to them. Here's 12 from the last 12 hours (that's not all that we got, to be clear): 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12. Interested in you taking a look.
I think the answer is either "you don't know enough about the specifics to have actionable advice" or "return to basic principles". I generally think that, had they been open about Altman blatantly lying to the board about things, and about Murati and Sutskever being the leaders of the firing, then there would've been (a) less scapegoating and (b) a greater likelihood that Altman's coup would have failed.
But I don't know the details to be confident about actionable advice here.