Thanks anyway :)
Also, yeah, makes sense. Hopefully this isn't a horribly misplaced thread taking up people's daily scrolling bandwidth with no commensurate payoff.
Maybe I'll just say something here to cash out my impression of the "first post" intro-message in question: its language has seemed valuable to my mentality in writing a post so far.
Although, I think I got a mildly misleading first-impression about how serious the filter was. The first draft for a post I half-finished was a fictional explanatory dialogue involving a lot of extended metaphors... A...
Thanks! :)
Yeah, I don't know if it's worth it to make it more accessible. I may have just failed a Google + "keyword in quotation marks" search, or failed to notice a link when searching via LessWrong's search feature.
Actually, an easy fix would just be for Google to improve their search tools, so that I can locate any public webpage, however specific, just by ranting at my phone.
Anyway, thanks as well to Ben for tagging those mod-staff people.
Hey, I'm new to LessWrong and working on a post. However, at some point the guidelines that pop up at the top of a fresh account's "new post" screen went away, and I cannot find the same language in the New Users Guide or elsewhere on the site.
Does anyone have a link to this? I recall a list of suggestions like "make the post object-level," "treat it as a submission for a university," "do not write a poetic/literary post until you've already gotten a couple object-level posts on your record."
It seems like a minor oversight if it's impossible to find certa...
Thanks! It's no problem :)
Agreed that the interview is worth watching in full for those interested in the topic. I don't think it answers your question in full detail, unless I've forgotten something they said - but it is evidence.
(Edit: Dwarkesh also posts full transcripts of his interviews to his website. They aren't obviously machine-transcribed or anything, more like what you'd expect from a transcribed interview in a news publication. You'll lose some body language/tone details from the video interview, but may be worth it for some people, since most ...
I am not an AI researcher, nor do I have direct access to any AI research processes. So, instead of submitting an answer, I am writing this in the comment section.
I have one definite, easily shareable observation. I drew a lot of inferences from it, which I will separate out so that the reader can condition their world-model on their own interpretations of whichever pieces of evidence - if any - are unshared.
This interview, in this particular segment, with the part that seems most relevant to me occurring at roughly the 40:15 timestamp.
So, in this segment...
(edit: formatting on this appears to have gone all to hell and idk how to fix it! Uh oh!)
(edit2: maybe fixed? I broke out my commentary into a second section instead of doing a spoiler section between each item on the list.)
(edit3: appears fixed for me)
Yep, I can do that legwork!
I'll add some commentary, but I'll "spoiler" it in case people don't wanna see my takes ahead of forming their own, or just general "don't spoil (your take on some of) the intended payoffs" stuff.
...I'm pretty sure GPT-N won't be able to do it, assuming they follow the same paradigm.
I am curious if you would like to expand on this intuition? I do not share it, and it seems like one potential crux.
I do not share this intuition. I would hope that saying a handful of words about synthetic data would be sufficient to make your imagination less certain about this assertion. But I am tempted to try something else first.
Is this actually important to your argument? I do not see how it would end up factoring into this problem, except...
I am not sure whether this has been discussed well enough elsewhere regarding Project Lawful, but it is worth reading despite a fairly high value-of-an-hour multiplied by the huge time commitment; the specifics of how it is written add many more elements to the "pros" side of the general pros-and-cons considerations of reading fiction.
It is also probably worth reading even if you've got a low tolerance for sexual themes - as long as that tolerance isn't so low that you'd feel injured by having to read that sort of thing.
If you've ever wondered why Eliezer describes hi...
I have been contemplating Connor Leahy's Cyborgism and what it would mean for us to improve human workflows enough that aligning AGI looks less like:
Sisyphus attempting to roll a 20 tonne version of The One Ring To Rule Them All into the caldera of Mordor while blindfolded and occasionally having to bypass vertical slopes made out of impossibility proofs that have been discussed by only 3 total mathematicians ever in the history of our species - all before Sauron destroys the world after waking up from a restless nap of an unknown length.
I think this is wh...
I cannot explain the thoughts of others who have read this and chose not to comment.
I would not have commented had I not gone through a specific series of 'not heavily determined' mental motions.
First, I spent some time in the AI recent news rabbit hole, including an interview with Gwern wherein he spoke very beautifully about the importance of writing.
This prompted me to check back in on LessWrong, to see what people have been writing about recently. I then noticed your post, which I presumably only saw due to a low-karma content-filter setting I'd disabled...