All of Oxidize's Comments + Replies

Thanks for the advice. I see how the linked posts are much more specific than the one I made. I'll try making some posts confined to specific domains of psychology, maybe in a very detailed & rational structure. Then maybe I can link to those posts in a larger post where I use those understandings/pieces of information to make a claim about a vehicle for using the information for practical change in the real world. I'm not sure I'm capable of giving up on macro-directional efforts like attempts to improve humanity as a whole, but I'll try to change the way I structure writings to be self-contained and linked externally for supplemental information, as opposed to the entire post being dependent on a linked doc.

4Viliam
Yes, this seems to me like a good strategy for posting on LW. Start with smaller topics, then generalize (and link to previous posts when needed). One advantage is that when things go wrong -- if one of the smaller articles is strongly rejected -- it gives you an opportunity to stop and reflect. Maybe you were wrong, in which case it is good that you didn't write the more general article (because it would be downvoted). Maybe the LW readers were wrong, but that still means that you should communicate your (smaller, specific) point better before moving to more general claims. Another advantage is that, if your circumstances or priorities change and suddenly you don't have time to write for LW anymore, the smaller self-contained articles still provide value. I have seen people make the mistake of posting a long outline first (which sometimes even got lots of upvotes), and then part 2 got downvoted because readers fundamentally disagreed with it... and now what? If someone disagrees with part 2, they probably won't be happy about part 3, which builds upon part 2, so now every part would get a downvote.

Thank you for the advice. I'll switch my writing style to be more objective & I'll try to remember to avoid ineffective pandering/creative styles. I'll continue linking at the end of posts when necessary, but I'll try to make sure my initial post provides value to readers.

Thanks for including the link. I'll read through these and use the posts to further my understanding of the community.

Thank you for this comment. I view writing through a marketing context, but I didn't realize that the people on LessWrong are this motivated by intellectual stimulation/learning. In retrospect it seems obvious, but nonetheless I'm glad to have learned from my mistakes. From now on I'll prioritize using curiosity & supplying new information, with more concise references to contexts/background information. And I'll avoid the kind of emotionally targeted tone/structure that I used in my first post.

Thanks for the advice. I want to learn how to make better posts in the future so I'll try to figure out how to improve.

- Should I not have begun by talking about background information & explaining my beliefs?
- Should I have assumed the audience had contextual awareness and gone right into talking about solutions?


Or was the problem more along the lines of writing quality, tone, or style?

- What type of post do you like reading?
- Would it be alright if I asked for an example so that I could read it?


Also you're right. Looking back that... (read more)

1datawitch
I don't really have an opinion on the first two questions. I usually don't read AI posts (especially technical or alignment ones; I'm not an ML engineer and usually struggle to follow them). I read like... stories, everything zvi writes, posts my friends make, things that catch my interest...

https://www.lesswrong.com/posts/KeczXRDcHKjPBQfz2/against-yudkowsky-s-evolution-analogy-for-ai-x-risk
https://www.lesswrong.com/posts/Q3qoy8DFnkMij4xzC/ai-108-straight-line-on-a-graph
https://www.lesswrong.com/posts/D82drnrhJEmPpoSEG/counting-objections-to-housing
https://www.lesswrong.com/posts/PaL38q9a4e8Bzsh3S/elon-musk-may-be-transitioning-to-bipolar-type-i
https://www.lesswrong.com/posts/DCcaNPfoJj4LWyihA/weirdness-points-1

Those five are sampled from the last two weeks. I also read literally everything zvi posts. I always sort globally by new and then just click on whatever looks interesting. Your post had typos and I didn't really like the style, but it's hard to point to any one thing. And that's not a crux; if I thought it was valuable I wouldn't really care. My top suggestion is literally just to have put the Google doc in the post.
2Rafael Harth
This is a completely wrong way to think about it, imo. A post isn't this thing with inherent terminal value that you can optimize for regardless of content. If you think you have an insight that the remaining LW community doesn't have, then and only then[1] should you consider writing a post. Then the questions become: is the insight actually valid, and did I communicate it properly? And yes, the second one is a huge topic -- so if in fact you have something valuable to say, then sure, you can spend a lot of time trying to figure out how to do that, and what e.g. lsusr said is fine advice. But first you need to actually have something valuable to say. If you don't, then the only good action is to not write a post. Starting off by just wanting to write something is bound to be not-fruitful. ---------------------------------------- 1. yes, technically there can be other goals of a post (like if it's fiction), but this is the central case ↩︎
4Viliam
I would suggest choosing a less grandiose topic. Something more specific; perhaps something that you know well. ("What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world." -- source) As a reader I prefer it when posts are self-contained; when I get value from the post even without clicking any of the links. The information linked should be optional to the experience. Looking at the topics of my posts... books I have read (1, 2, 3), things happening in the rationality community (1, 2), some psychological things I have noticed (1, 2, 3, 4, 5), questions (1, 2, 3, 4), things that started as comments but turned out to be too long (1, 2, 3), playing with math (1, 2). There is no theory of everything, no proposal to fix humanity, etc.
4lsusr
I recommend you find a post you like that was well received and copy its format. I agree with datawitch that your post "felt like a politician's speech". Your post contains vague grandiose claims, but is lacking in specific factual claims. While that kind of writing does occasionally succeed on this website if you pander hard enough, I recommend against it. Good writing on this website tends to be specific, concrete, and objective. I notice you use creative writing styles. While there is value in that, I don't think that's a good way for you, personally, to begin writing on this website. I recommend you learn to write in a more detached, factual style first, before embellishing it in that way. That's because poetic writing can too easily hide unclear thinking. Just look at the karma number next to each post. Ignore any post with less than 50 karma. Pay special attention to any post with more than 100 karma. That will show you more-or-less-objectively what people on this website like reading. If you want to read the best of the best, check out curated. In this context, there are two good uses of links: * Linking to a definition of a term, so that people who don't know the term can find it and people who do know the term don't have to read the definition. * Linking to supplemental information for people who really liked your post and want to read more. I recommend you do not link to a doc expecting people to read it. People will read a linked document only after they trust you a lot. The best source of trust is "What I just read was really worthwhile". If the first thing you write is "go read this other doc", then you have failed to establish the prerequisite trust.

I'm a 20 year old who perceives myself as the kind of young founder you're probably talking to in this post. And I've noticed a lot of older guys have similar sentiments to you about younger guys and the perspective often annoys me. I do everything I can to learn from other people, but in the context of giving and receiving advice I believe that a lot of information is typically not considered. For example, you talk about a lot of mistakes younger people make that could be easily avoided if they had the older generation's wisdom, but as conveyed by this po... (read more)

2Raemon
I think in my ideal world this would be a series of blogposts that I actually expected people to read all of. Part of the reason it's all one post is that I didn't expect people to reliably get all of them. Partly, I think each individual piece is necessary. Also, kind of the point of pieces like this is to be sort of guided meditations on a topic that let you sit with it long enough, and approach it from enough different angles, that a foreign way of thinking has time to seep into your brain and get digested. I expected people would mostly not believe me without the concrete practical examples, but the concrete examples are (necessarily) meandering, because that's what the process was actually like (you should expect the process of transmitting soulful knowledge to feel some-kind-of-meandering, at least a fair amount of the time). I wanted to make sure people got the warnings at the same time that they got the "how to" manual -- if I separated the warnings into a separate post, people might only read the more memetically successful "how to" posts. I do suspect I could write a much shorter version that gets across the basic idea, but I don't expect the basic idea to actually be very useful, because each of the 20 skills is pretty deep, and conveying what it's like to use them all at once is just necessarily complicated.
4Raemon
I will say I think there are a few different things people mean by burnout, but they are each individually pretty real. Three examples that come to mind easily: "Overworked" burnout. If I've been working 60-hour weeks for months on end, eventually I'm just like "I can't do this anymore." My brain gets foggy. I feel exhausted. My body/mind start to rebel at the prospect of doing more of that type of work. In my experience, this lasts 1-3 weeks (if I am able to notice and stop and switch to a more relaxed mode). When I do major projects, I have a decent sense of when Overworked Burnout is coming, and I time the projects such that I work up until my limit, then take a couple weeks to recover. "Overworked + Trapped" burnout. As above, except for some reason I don't have the ability to stop -- people are depending on me, or future me is depending on me, and if I were to take a break a whole bunch of projects or relationships would come crashing down and destroy a lot of stuff I care about. Something about this has a horrible coercive feeling that is qualitatively different from being tired/overworked. Some kind of "sick to my stomach" feeling: you want to curl up and hide, but you can't curl up and hide. This can happen because your boss is making excessive demands on you (or firing you), or simply because I volunteered myself into the position. Each of those feels differently bad. The former because you maybe really can't escape without losing resources that you need. The latter because if I've put myself in this situation, then something about my self-image and how others relate to me will have to change if I were to escape. "Things are deeply fucked" burnout. This feels similar to Overworked+Trapped, but it's some other kind of trapped, other than just "needing to put in a lot of hours." Like, maybe there's conflict at work, or in a close relationship, and there are parts of it you can't talk about with anyone, and the people you can easily talk about it with have

I don’t believe burnout is real. I have theories on why people think it’s real.

More interesting would be to hear why you don’t think it’s real. (“Why do people think it’s real” is the easiest thing in the world to answer: “Because they have experienced it”, of course. Additional theorizing is then needed to explain why the obvious conclusion should not be drawn from those experiences.)

Thanks for making a well-thought-out comment. It's really helpful for me to have an outside perspective from another intelligent mind.

I'm hoping to learn more from you, so I'm going to descend into a way of writing that assumes we have a lot of the same beliefs/understandings about the world. So if it gets confusing, I apologize for not being able to communicate myself more clearly.



Your 1st point:
This is an interesting perspective shift. The concept that by endeavoring to help people understand suffering, I would be causing suffering itself, since I'd be c... (read more)

1StartAtTheEnd
Thank you! Writing is not my strong suit, but I'm quite confident about the ideas. I've written a lot, so it's alright if you don't want to engage with all of it. No pressure! I should explain the thing about suffering better: We don't suffer from the state of the world, but from how we think about it. This is crucial. When people try to improve other people's happiness, they talk about making changes to reality, but that's the least effective way they could go about it. I believe a change in perspective is even sufficient: we can enjoy life as it is now, without making any changes to it, by simply adopting a better perspective on things. For example, inequality is a part of life, likely an unavoidable one (the Pareto principle seems to apply in every society no matter its type). And even under inequality, people have been happy, so it's not even an issue in itself. But now we're teaching people in lower positions that they're suffering from injustice, that they're pitiful, that they're victims, and we're teaching everyone else that life could be a paradise if only evil and immoral influences weren't preventing it. But this is a sure way to make people unhappy with their existence. To make them imagine how much better things could be, and make comparisons between a naive ideal and reality. Comparison is the thief of joy, and most people are happy with their lot unless you teach them not to be. Teaching people about suffering doesn't cause it per se, but if you make people look for suffering, they will find it. If you condition your perception to notice something unpleasant, you will see it everywhere. Training yourself to notice suffering may have side-effects. I have a bit of tinnitus, and I got over it by not paying it any attention. It's only like this that my mind will start to filter it away, so that I can forget about it. I don't think you need pain to motivate people to change; the carrot is as good as the stick. But you need one of the two at minimum (curiosity and ot

The post is targeted towards the subset of the EA/LW community that is concerned about AI extinction

Ultimately, I think I had a misunderstanding of the audience that would end up reading my post, and I'm still largely ignorant of the psychological nuances of the average LW reader.

Like you implied, I did have a narrow audience in mind, and I assumed that LW's algorithm would function more like popular social media algorithms and only show the post to the subset of the population I was aiming to speak to. I also made the assumption that implications of my p... (read more)

3CstineSublime
Edit: [on reflection I think perhaps as a newcomer what you should do is acquaint yourself with what other intelligent and perceptive posts have been made over the last decade on LessWrong on issues around A.I. extinction before you try and write a high-level theory of your own. Maybe even create an Excel spreadsheet of all the key ideas, as a post-graduate researcher does when preparing for their lit review] I am still not sure what your post is intended to be about. What is it about "A.I. Extinction" that you have new insight into? I stress "new". As for your re-do of the opening sentence, those two examples are not comparable: getting a prognosis directly from an oncologist who has studied oncology, who presumably you've been referred to because they have experience with other cases of people with similar types of cancer, and who has seen multiple examples develop over a number of years, is vastly different from the speculative statement of an unnamed "A.I." researcher. The A.I. researcher doesn't even have the benefit of analogy, because there has never been, throughout the entire Holocene, anything close to a mass extinction event of human life perpetrated by a super-intelligence. An oncologist has a huge body of scientific knowledge, case studies, and professional experience to draw upon which are directly comparable, together with intimate and direct access to the patient. Who specifically is the researcher you have in mind who said that humanity has only 5 years? If I were to redo your post, I would summarize whatever new and specific insight you have in one sentence and make that the lead sentence. Then spend the rest of the post backing that up with credible sources and examples.

Thanks for commenting.

I didn't include the contents in the link because I thought it would make the post too long, and I thought it had a different main idea, so I figured it would be better if I made two separate posts. I can't do that now because of the automatic rate-restriction, but maybe it would've been a better post if I included the contents of the linked doc in the post itself.

I'm realizing that I'm packing an unusually large amount of information within a single post, and I only attempt to fill gaps in information with links & footnotes that w... (read more)

Oh. I linked the wrong thing. I would down vote this too. Sorry about setting an expectation and then not fulfilling it.

Edit: I fixed the link at the end of the post.

It sucks that I have to wait a week before posting anything again, though, just because I made a simple mistake. I guess I'll just have to hope I don't mess up again in the future.

3notfnofn
quick comment: I like the content of the google doc that I've read (so far), but I only clicked on it at all because of this comment (and it's kind of ugly, so I might not have read it even if I had clicked on it). Out of curiosity, why couldn't it have been included in the post itself? (edit: I don't think "Currently in early stages, so you will need to be sharp and knowledgeable to get the gist of my intentions" is a good idea. It sends signals of unjustified arrogance, even if it ends up being true.)

I'm new to LW.  Why was this post downvoted? How can I make better posts in the future? https://www.lesswrong.com/posts/n7Fa63ZgHDH8zcw9d/we-can-survive

1eigen
Myself, I feel like every two weeks or so we see this kind of post, with a similar style to Eliezer's, so it feels repetitive... but I may be wrong; just my reaction after seeing that post.
6CstineSublime
I can't speak for the community, but after having glanced at your entire post I can't be sure just what it is about. The closest you come to explaining it is near the end, where you promise to present a "high-level theory on the functional realities" that seems to be related to everything from increased military spending, to someone accidentally creating a virus in the lab that wipes out humanity, to combating cognitive bias. But what is your theory? Your post also makes a number of generalized assumptions about the reader and human nature, and invokes the pronoun "we" far too many times. I'm a hypocrite for pointing that out, because I tend to do it as well -- but the problem is that unless you have a very narrow audience in mind, especially a community that you are a native to and know intimately, you often run the risk of making assumptions or statements they will at best be confused by, and at worst will get defensive about for being included in. Most of your assumptions aren't backed up by specific examples or citations to research. For example, in your first sentence you say that we subconsciously optimize for there being no major societal changes precipitated by technology. You don't back this up. I would assume that the existence of gold-bugs proves there is a huge contingent of people who invest real money based precisely on the fact that they can't anticipate what major economic changes future technologies might bring. There are currently billions of dollars being spent by firms like Apple, Google, even JP Morgan Chase on A.I. assistants, in anticipation of a major change. I could go through all these general assumptions one by one, but there are too many for it to be worth my while. Not only that, most of the footnotes you use don't make reference to any concepts or observations which are particularly new or alien. The Pareto principle, the Compound Effect, Rumsfeld's epistemology... I would expect your average LessWrong reader is very familiar with t

Could I get some constructive criticism about why I'm being downvoted? It would be helpful for the sake of avoiding the same mistakes in the future.

1dirk
I didn't vote, but one possible flaw that strikes me is that it's not as concrete as I'd like it to be—after reading the post, I'm still not clear on what precisely it is that you want to build.

Correct. It lacks tactical practicality right now, but I think that from a macro-directional perspective, it's sensible to align all of my current actions to that end goal. And I believe there is a huge demand among business-minded intellectuals and ambitious people for a community like this to be created.

AI isn't really new technology though, right? Do you have evidence of alarmists around AI in the past?

And do you have anecdotes of intelligent/rational people being alarmist about a technology, where the alarm turned out to be false?

I think these pieces of evidence/anecdotes would strengthen your argument.

What is your estimated timeline for humanity's extinction if it continues on its current path?

What information are you using for the foundation of your beliefs around the progress of science & technology?

2Viliam
I definitely agree that specific examples would make the argument much stronger. At least, it would allow me to understand what kind of "false alarms" we are talking about here: is it mere tech hype (such as cold fusion), or specifically humanity-destroying events (such as nuclear war)? I think we haven't had that many things that threatened to destroy humanity. Maybe it's just my ignorance speaking, but nuclear war is the only example that comes to my mind. (Global warming, although possibly a great disaster, is alone not an extinction threat to entire humanity.) And mere tech hypes that didn't threaten to destroy humanity don't seem like a relevant category for the AI danger. Perhaps more importantly, with things like LK99 or cold fusion, the only source of hype was "people enthusiastically writing papers". With AI, the situation is more like "anyone can use (for free, if it's only a few times a day) a technology that would have been considered sci-fi five years ago". Like, the controversy is about how far and how fast it will get, but there is no doubt that it is already here... and even if somehow magically the state of AI never improved beyond where it is today, we would still have a few more years of social impact as more people learn to use it and find new ways to use it. EDIT: By "sci-fi" I mean: imagine creating a robotic head that uses speech recognition and synthesis to communicate with humans, uploading the latest LLM into it, and sending it by a time machine five or ten years into the past. Or rather, sending thousands of such robotic heads. People would be totally scared (not just because of the time travel). And finding out that the robotic heads often hallucinate would only calm them down a little.

How do you think competent people can solve this problem within their own fields of expertise? 

For example, the EA community is a small & effective community like you've referenced for commonplace charity/altruism practices. 

How could we solve the median researcher problem & improve the efficacy & reputation of altruism as a whole?

Personally, I suggest taking a marketing approach. If we endeavor to understand important similarities between "median researchers", so that we can talk to them in the language they want to hear, we may be a... (read more)

What do you mean by red flag? Red flag on the author's side? If so, I don't understand your sentiment here.
Partisan issues exist.

I don't understand what you're saying here, but I want to understand.

Can you explain it like I'm 5?

8Viliam
The hypothetical most popular president (from the perspective of the entire population) would lose in the primaries. Their own party would never nominate them, because they would seem like a sell-out to them. Imagine the following 3 candidates:

- A -- strongly against illegal immigrants, and quite racist against the legal ones
- B -- suggests stopping illegal immigration, but making legal immigration much easier
- C -- wants to make immigration easier; refuses to debate illegal immigration because "no one is illegal"

A would win the Republican primaries, C would win the Democratic primaries. B would lose both. For most people, B would be preferable to either A or C. But they won't get to make that choice. They will have to choose between A and C.

Let's make it more complicated, and split candidate B into two similar candidates: B-R and B-D. Both B-R and B-D have the same position on immigration, but on other topics, B-R leans slightly Republican and B-D leans slightly Democrat. A hypothetical rational Republican might say "let's nominate B-R for our party, because they are most likely to get elected, and at least they agree with us on many other issues -- a 100% chance of B-R winning is preferable to a 50% chance of A and a 50% chance of C". A hypothetical rational Democrat might similarly prefer a 100% chance of B-D over a 50% chance of A and a 50% chance of C. (And basically, this is what the median voter theorem suggests: that the election will ultimately be between B-R and B-D, rather than between A and C.) But in a situation with primaries, B-R will lose to A, and B-D will lose to C.

I suspect that, to a smaller degree, this might be a problem with political parties in general, though without primaries it is probably much smaller. Individuals need allies to win, and people like B-R and B-D won't find many enthusiastic allies in their respective parties.
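The A/B/C scenario above can be reproduced with a toy one-dimensional spatial-voting model. Everything in this sketch is an illustrative assumption of mine (candidate positions, a uniform electorate, each voter backing the nearest candidate), not anything from the comment itself:

```python
# Toy spatial-voting sketch: the moderate B beats both hardliners
# head-to-head, yet loses both primaries. All numbers here are
# illustrative assumptions chosen to match the scenario above.

positions = {"A": 1.0, "B": 5.0, "C": 9.0}  # hardliner, moderate, hardliner

# 101 voters spread evenly across a 0-10 ideological axis.
voters = [v / 10 for v in range(101)]

def winner(electorate, names):
    """Plurality winner when each voter backs the nearest candidate."""
    tallies = {n: 0 for n in names}
    for v in electorate:
        tallies[min(names, key=lambda n: abs(v - positions[n]))] += 1
    return max(tallies, key=tallies.get)

# Only each party's half of the axis votes in its primary.
rep_primary = [v for v in voters if v < 5]
dem_primary = [v for v in voters if v > 5]

print(winner(rep_primary, ["A", "B"]))  # A -- the base picks the hardliner
print(winner(dem_primary, ["B", "C"]))  # C
print(winner(voters, ["A", "B"]))       # B beats A head-to-head...
print(winner(voters, ["B", "C"]))       # ...and beats C, yet is never nominated
```

Under these assumptions B is the Condorcet winner over the full electorate, but neither primary electorate ever lets B reach the general election -- which is exactly the point being made.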
The hypothetical most popular president (from the perspective of the entire population) would lose in the primaries. Their own party would never nominate them, because they would seem like a sell-out to them. Imagine the following 3 candidates: A -- strongly against illegal immigrants, and quite racist against the legal ones B -- suggests to stop illegal immigration, but make the legal immigration much easier C -- wants to make immigration easier; refuses to debate illegal immigration because "no one is illegal" A would win the Republican primaries, C would win Democratic primaries. B would lose both. For most people, B would be preferable to either A or C. But they won't get to make that choice. They will have to choose between A and C. . Let's make it more complicated, and split the candidate B into two similar candidates: B-R and B-D. Both B-R and B-D have the same position on immigration, but on different topics, B-R leans slightly Republican, and B-D leans slightly Democrat. A hypothetical rational Republican might say "let's nominate B-R for our party, because they are most likely to get elected, and at least they agree with us on many other issues -- the 100% chance of B-R winning is preferable to a 50% chance of A and a 50% chance of C". A hypothetical rational Democrat might similarly prefer a 100% chance of B-D over a 50% chance of A and a 50% chance of C. (And basically, this is what the median voter theorem suggests: that the election will ultimately be between B-R and B-D, rather than between A and C.) But in a situation with primaries, B-R will lose to A, and B-D will lose to C. . I suspect that to a smaller degree this might be a problem with political parties in general, even if without primaries it is probably much smaller. Individuals need allies to win, and people like B-R and B-D won't find many enthusiastic allies in their respective parties.