Goal = Save the world.


See Ending Ignorance | Solving Ignorance[1] & We can Survive for greater context. (Includes definitions of terms like System, Functional, functional reality, functional system, functional systems psychology, High-level, tactical, conscious, subconscious, Reward, Punishment, Satisfaction, Suffering.)

 

Use the table of contents on the left. 

Read footnotes or suffer in confusion

Read links if you need additional context

Easier to read format 

 

Tech problem[2]

End human suffering

 

Ever felt like shit?

Ok, of course you have, bad question. Do you remember how it felt?

Do you remember the actions you took as a result of it?

When you were suffering, you didn’t give a s*** about EA.

The only thing that mattered to you… was you.

You were one selfish bastard.

And that’s how everyone else is as well.

If you think from a Maslow’s hierarchy of needs perspective, you’ll note that there is actually a hierarchy of human priorities that affect decision-making in significant ways.

China has experienced a technological growth rate unparalleled by any nation on earth at any point in time.

The first generation was one of farmers, the second was of factory workers, and the third are specialized intellectual workers like you and me.

The result of this rapid change is a different cultural norm.

Unlike in the U.S./Europe, altruism is not an obvious, undebated ideal for the layman to pursue.

I know what it’s like to not have enough food to eat. I’ve eaten 0-1 meals/day for a year and have gone 9 days without food.

Your mind curbs and twists your “rationality” into whatever will get you food.

Your unmet expectations, desires, lack of energy, and the gap between the pain you’re experiencing right now and the pleasure you believe you should feel are major barriers to getting anything done. And the fact that you’re in that state cripples your actual ability and reinforces the actions & environments that keep you in that state.

I don’t think any of us would dispute that basic needs like food and shelter must be met if we want greater worldwide participation in EA as a concept.

But I am arguing that we’re thinking about the problem wrong.

And I argue this because those 9 days of eating nothing were the most productive 9 days of my life.

I argue this because that period of 0-1 meals/day for a year is me right now, and I’m still making progress towards my goals.

By using my own reward/punishment system as a test dummy, I’ve discovered important aspects of some of the real psychological systems in our mind that have significance relative to reward, punishment, and EA.

After a lot of testing and suffering, I’ve found out the necessary prerequisites to EA motivation.

  1. Suffering & satisfaction.
  2. Social influences | centrist theory | Early adopters vs the layman
  3. A specific system within the brain [3]

Suffering & Satisfaction (The 7 Premises for this being 1 of 3 prerequisites for saving the world.)

  1. By giving people the intellectual tools necessary to be satisfied in their daily life, they will gain goodwill towards EA as a concept, increase their overall competency at a more rapid rate, be more giving, and naturally gravitate toward EA activity provided they have the foundational values/ideals of rationality & selflessness. (Or rationality & selfish status gain)
  2. By creating a system where participation in EA leads to a satisfactory reward chain of activities, consistency of activity will be incentivized and therefore prevalent.
  3. By bringing attention[4] to tactical/emotionally pulling patterns of suffering, people will recognize it in their own life, and we will create an unfulfilled desire that only we have the solution for.
  4. Fundamentally, everyone will suffer no matter what. The deeper your understanding of Functional systems psychology, the more likely you are to come to this conclusion. In other words, once you reach this level of understanding, your perception of the world state with the highest reward for yourself becomes the positive AI future, and thus you will work towards it.
  5. We can make this “feel real” through manipulation of media/entertainment, values, and perceptions of the world.
  6. Personal concurrent life-satisfaction is possible in spite of punishment/suffering when punishment/suffering is perceived as a necessary sacrifice for an impending reward.
  7. Suffering can be significantly reduced without reducing psychological punishment through the management of expectations and the implementation of psychological toolkits.

Again, I have tested all of these claims on myself. Aside from the story about China, I’m not parroting things I’ve read/heard on the internet. I’m speaking from experience, so come at my argument from as many perspectives as you can and try to tear it apart. My argument is based on real systems & functions, so I’m confident that disagreements will primarily come from informational discrepancies between me and readers, and I’ll be happy to discuss and clarify those discrepancies so that I can address them in future docs.

Counterarguments:

1. Because of unknown unknowns and the lack of clarity with which we’ve determined EA to be an effective method of avoiding the next Fermi filter, EA may not actually be an effective way of avoiding it. As in, even if there were 1 million Eliezer Yudkowskys, nothing would change relative to the next Fermi filter.

  • In which case, we’re all f***** because we are products of EA. And if the rule is that EA has no functional difference on the outcome of the human species/intelligent life in the universe, we are also bound by that rule[5].
  • Our intuition tells us otherwise, and we are the smartest humankind has, so we’re going to have to act as if we’re correct. (Because we actually are correct. If every human on earth were Elon Musk, the likelihood we pass the next filter is high)

2. (Your comment)

Reader, give me your best counterargument and let’s have a rational conversation!


Social influences[6] | centrist theory [7] | Early adopters vs the layman[8]

 


 


A specific unnamed system within the brain[9]


Create a community

  • As referenced earlier, the impact of social influences, centrist theory, and the concept of early adopters as opposed to the layman makes the successful creation of a community paramount to the virality of EA psychology.
  • I have a separate doc[10] on this.
     

Spread logical decision making

  • Even if people are incentivized to “do the right thing,” we are still in trouble if the average human is as stupid as I am. From a marketing perspective, there is currently massive demand to learn. We just need to give people the tools they need to do so.
  • I will create the high-level concepts and the structure, and I will depend on this community to help me encompass all relevant information, create a consumer-oriented experience, iron out my personal biases and logical errors, and flesh out the info from high-level concept to situation-specific action items/valuable information sources.
     

World management problem (Incentives[11])

If we solve the tech problem, there’s still the problem of the bureaucracy imploding. This is a problem I am not focusing on much right now, aside from subconsciously optimizing for a few concepts from a long-term positional perspective so that, when the time is right, the EA-optimized functional systems in the world are in an optimal position to tackle this problem.

 

Conclusion

I’ve read a few posts on this community, and I was glad to see a level of rationalism I couldn’t find anywhere else. But from a human extinction perspective, the world needs more than rationalism.


It needs rationalism, functionalism, systems thinking, business/realism, and marketing.

Here is a story that I think you’ll all enjoy.

In a galaxy far far away, there was a 1 in a trillion planet. As an astronomical anomaly, it was a planet just like ours. There were viruses, bacteria, insects, reptiles, flora, and mammals. There was even a species of monkey that used rocks as tools. On this planet, there was an ant colony. This ant colony was one of millions in vicious competition with each other. And right when it was on its last legs, something happened. The ants’ simple brains started forming parts of a complex neurological structure that resembled a level of agency and intelligence similar to humans. The ants started manipulating other ants’ pheromones to trick and defeat them. They started not just destroying, but consuming the resources of other ant colonies, much like imperialist nations have done in the past. They assassinated queens and placed their own queens in charge, and through pheromone mimicry the other ant colony never figured it out. The queen then starved the old ants, and when the queen laid eggs and created new ants, they fed on the old colony’s carcasses. With their superior intellect, the ant colony quickly grew and took over other colonies, increasing its intellectual capacity. But then something happened. Like the pattern present in all ants, the colony started waging war with other colonies of the same species that had different queens, over resource scarcity!

So we can assume that the rule of Moloch applies here, as the ants’ reward modeling system would evolve to optimize towards whatever leads to the growth/benefit of the colony. Any colony that doesn’t do this would be defeated by colonies that do. The same is true with technological progress. This is one way that this theoretical ant colony and modern society are identical. Additionally, the universe tends towards simplicity, so it is overwhelmingly likely that the reward system would be built on progress-adjacent models as opposed to directly optimized models (as evidenced in human brain reward systems & AI reward systems). However, in spite of all these similarities present in all instances of agency as a universal concept, there is one principle that makes humans different from ant colonies. That is the sheer lack of individualism in the individual ants that make up the colony-wide neural network, in contrast to present society. Individualism is the ultimate enemy of Moloch. If nuclear fallout is a Fermi filter, then Elon Musk’s efforts to get a self-sustaining colony on Mars are a direct challenge to that filter. Elon Musk could not exist in an ant colony, or any form of collaborative intelligence society, and Moloch would be absolute.

I tell this story to illustrate the fact that given the nature of biological evolution and power, it is probably far more likely that a species of collaborative intelligence would reach world dominance.

And when this happens, in 999,999,999,999,999 out of 1,000,000,000,000,000 worlds Moloch wins every time. Moloch is winning even in our present society, where individualism is a significant psychological value for a significant portion of society.
 

Furthermore, the human species has an opportunity that it has never had before, because the intellectual capital present in the world is greater than ever before. The thing is, no one is realizing this or utilizing it, because humans can’t think on long time horizons. We only think about right now, and the systems currently in place within our society are legacy systems. They were created in a time when people couldn’t read, write, or do basic arithmetic, and when those skills were values to aspire to. And like the stupid socially-reward-optimized creatures we are[12], we somehow convince ourselves that operating under these crippling systems is ok.

Here is a second story that I think you’ll enjoy.

In a galaxy far far away, there was a planet just like ours. In fact, it was exactly like ours. All of the same people existed, and they were all in the exact same places. Except there was one difference. Stanislav Petrov was to be promoted to Lieutenant Colonel in 1984. So he wasn’t the lieutenant colonel on duty on September 26th, 1983, when the Russians received a false alarm of a U.S. (nuclear) missile attack (the day when, in our world, he chose not to alert his superiors). Given that we live in a conformist society where the dominant intellectual agents value social rewards, status, authority, and lack of responsibility as opposed to individualism, it is a miracle that Stanislav Petrov chose not to alert his superiors and potentially start a nuclear war. In this alternate world where Stanislav wasn’t there, if we use rationalist principles as opposed to anecdotal evidence, you can guess what would’ve happened.

In another 99,999 out of 100,000 planets exactly like ours, Britain won the American Revolution. So libertarianism didn’t spread, and the average nation today operates like China (which strengthens Moloch).

And finally, I want to introduce the concept of luck, perspective, anecdotal history, and their relevance to human extinction.

Think for a second. How likely is it that you exist right now, and that you’re reading this? Almost infinitesimal, right? But how can that be? How can an infinitesimally likely occurrence actually exist? It’s impossible to win the lottery, right? Well, think from another perspective. For people who’ve already won the lottery, it is 100% likely for the lottery to have been won. In other words, just because we’ve gotten lucky as a species thus far does not mean human extinction was not almost infinitely likely in the past. My intellectual foundations are rooted in the existence of the internet, and the profound intellectuals who chose to share their beliefs about the world are prerequisites for me to exist, from a functional perspective.
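To make the lottery analogy concrete, here is a minimal toy sketch of the survivorship effect I’m describing (the numbers are made up purely for illustration): even if the unconditional odds of a world surviving its filters are tiny, every observer who is around to ask the question sees a perfect survival record.

```python
import random

# Toy survivorship illustration (hypothetical numbers, for illustration only).
# A tiny unconditional survival probability is perfectly consistent with every
# observer seeing an unbroken record of survival, because observers only exist
# in the worlds that made it through.
random.seed(0)

N_WORLDS = 1_000_000
P_SURVIVE_FILTER = 0.001  # assumed per-world chance of passing the filter

surviving_worlds = sum(random.random() < P_SURVIVE_FILTER for _ in range(N_WORLDS))

print(f"Worlds that survived: {surviving_worlds} / {N_WORLDS}")
print(f"Chance a randomly chosen world survived: {surviving_worlds / N_WORLDS:.4%}")
print("Chance of having survived, asked from inside a surviving world: 100%")
```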

So come on rationalists, let’s be rational. Anecdotal evidence of what almost happened is just real-world evidence proving this theory’s basis in reality.[13]

And if we won’t be rational, no one else will. And we will just be one of the many world-states that fails. We’ve got to act now, and we’ve got to move fast, or our children will never grow up.

To finish this off,  here is a third story that I think you’ll enjoy.

In the milky way galaxy, on a planet called earth, there was a small child.

It was his first day at school, and he decided to read with the other children. He picked out a book, sat next to another child, and said, “Let’s read together!” They did so, but as soon as they started, the child noticed something was off. The boy next to the child was reading very slowly, pausing after every syllable, enunciating every letter, and speaking without comprehension of the words he spoke.

The child was upset, so he stood up, walked to another child, and said, “Let’s read together!”

The same thing happened. And when the day was over, the child, confused, asked his mother a question. “Why? Why can’t they read?” And the mother didn’t have an answer. Nor did the teacher, or his father, or anyone else the child had access to. So the child dismissed the thought.

Instead, from that day onward, the child read alone.

The child learned alone,

The child researched alone.

And the child’s intellectual capacity sprouted relative to the context of what was possible, instead of what the system of the child’s environment wanted to mold them into.

That child was me. And that child was you as well.

We are the early adopters of EA. We have goodwill relative to the concept of helping other people, and we have the intellect to grasp high-level concepts and see them brought to fruition in the real world.

Each of our impacts needs to be substantial, because we are all society has.

Please help me. 

I can’t do this alone.
 

Asking for collaboration

This post is meant for me to receive feedback, as opposed to making a long-term commitment in isolation from perspectives outside my own.

Tear it apart, be as critical as possible, and presume that I’m an idiot and that everything I’ve written was written out of complete ignorance of the ways of the world.

Nothing you could type on a digital screen will ever offend me, since my perception of pain is anchored to things like not eating anything for 9 days.

  1. ^

    Currently in early stages. Forgive me for linking to a group of unfinished thoughts.

  2. ^

    Proposed Solutions to the tech problem:

    1. End human suffering
    2. Create a community
    3. Spread logical decision making

  3. ^

    (A person’s perception of the world state with the most psychological reward or satisfaction for themselves) = the decision calculus for this system.  ((I haven’t figured out if it’s reward or satisfaction, and I haven't thought of a name for this system.))


     

  4. ^

    Everyone suffers, we are just often not aware of our patterns of suffering.

  5. ^

    This is a wide-arching imaginary rule for purely theoretical play. As it has no basis in reality, it is unlikely to be true.

    And anyone who uses this to take a defeatist perspective on EA simply does so because they have personal incentives to avoid EA and want to not feel guilty about it. 

  6. ^

    (For context, read ending Ignorance)

    Most people receive a significant amount of their psychological reward from social circles.

    In other words, their focus, attention, reward, actions, etc. are relative to the perceived responses of other people around them. 

    By providing the much desired, needed, and unfulfilled sense of community (as evidenced by the demand for social media, prevalent loneliness, etc.), we can make the layman receive psychological reward as a consequence of acting as a part of EA.

  7. ^

    (Political/sociological/logical-realist theory of human psychology.)

    Laymen gravitate towards where they perceive the center to be. 

    From a universal/physics/reality perspective, humans have no objective center.

    For example, many U.S. Republicans think LGBTQ policy/culture is madness. (Gender spectrum, trans women in women's sports, early gender education. (I'm aware these can be seen as strawmen; I'm making a point))

    From a universal perspective, family values, conservatism as a concept, and religiousness are equally as wild as any values expressed in LGBTQ communities.

    Internet communities lean right, where conservative values are considered more centrist. 

    While in academia/government/bureaucracy, leftist culture is considered more centrist.

    To reiterate, I don't care about politics. I use these examples to prove a point about centrism. Our political views are influenced more by our environment than by any of our core personal qualities, because most personal qualities are malleable relative to what we perceive to be our most-optimal course of action in pursuit of our own self-interest.

  8. ^

    (Startup/technical/SaaS/Business world Jargon that means a subset of a population that has goodwill to spend and is willing to take a desired action even without perception of short-term selfish gain.)

  9. ^

    I haven't thought of a name for it yet, but there is a system in the brain people know as "the logical mind," "the conscious," "that voice in my head," etc. 

    As opposed to other reward systems within the brain, this system takes a long-term approach to reward optimization. 

    What it believes you "should" do is dictated by your subconscious perception of the situation with maximal future reward. (This is referring to another system in the brain covered in functional systems psychology.)

    By using a functional systems psychology understanding of reward, we can use currently prevalent opportunity vehicles like media to change the layman's perception of what world state is most optimal for their selfish reward maximization into one that is optimal for EA.

  10. ^

    In early phases. Forgive me if it's unreadable...

  11. ^

    Media
    Values
    Perception of the world

    Systemic reward & punishment
    Parenting
    School systems
    Power process
    Perception of future | Macro problems
    Extracurricular opportunities
     

    +

    Role of Moloch vs intention in shaping society at high levels. 

    Probably fixable by using systems/power to shift incentives

  12. ^

    As explained earlier in social influences, centrist theory, and early adopters vs the layman


     

  13. ^

    This is another concept covered in "Ending Ignorance" that has to do with realism, the dysfunctionality of modern theories, and the human overestimation of the capability of future prediction.


     

3 comments

I like this post, but I have some problems with it. Don't take it too hard, as I'm not the average LW reader. I think your post is quite in line with what most people here believe (but you're quite ambitious in the tasks you give yourself, so you might get downvoted as a result of minor mistakes and incompleteness resulting from that). I'm just an anomaly who happened to read your post.

By bringing attention to tactical/emotionally pulling patterns of suffering, people will recognize it in their own life, and we will create an unfulfilled desire that only we have the solution for.

I think this might make suffering worse. Suffering is subjective, so if you make people believe that they should be suffering, or that suffering is justified, they may suffer needlessly. For example, poverty doesn't make people as dissatisfied with life as relative poverty does. It's when people compare themselves to others and realize that they could have it better that they start disliking what they have at the moment. If you create ideals, then people will work towards achieving them, but they will also suffer from the gap between the current state and what's ideal. You may argue "the reward redeems the suffering and makes it bearable", and yes, but only as long as people believe that they're getting closer to the goal. Most positive emotion we experience is a result of feeling ourselves moving towards our goals.

Personal concurrent life-satisfaction is possible in-spite of punishment/suffering when punishment/suffering is perceived as a necessary sacrifice for an impending reward.

Yes, which is why one should not reduce "suffering" but "the causes of unproductive suffering". Just like one shouldn't avoid "pain", but "actions which are painful and without benefit". The conclusion of "Man's Search for Meaning" was that suffering is bearable as long as it has meaning, that only meaningless suffering is unbearable. I've personally felt this as well. One of the times I was the most happy, I was also the most depressed. But that might just have been a mixed episode as is known from bipolar disorder.
I'm nitpicking, but I believe it's important to state that "suffering" isn't a fundamental issue. If I touch a flame and burn my hand, then the flame is the issue, not the pain. In fact, the pain is protecting me from touching the flame again. Suffering is good for survival, for the same reason that pain is good for survival. The proof is that evolution made us suffer, that those who didn't suffer didn't pass on their genes.

We are products of EA

I'm not sure this is true? EA seems to be the opposite of Darwinism, and survival of the fittest has been the standard until recently (everyone suddenly cares about reducing negative emotions and unfairness, to an almost pathological degree). But even if various forces helped me avoid suffering, would that really be a good thing?

I personally grew the most as a person as a result of suffering. You're probably right that you were the least productive when you didn't eat, but suffering is merely a signal that change is necessary, and when you experience great suffering, you become open to the idea of change. It's not uncommon that somebody hits rock bottom and turns their lives around for the better as a result. But while suffering is bearable, we can continue enduring, until we suffer the death of a thousand papercuts (or the death of the boiling frog, by our own hands)
That said, growth is usually a result of internal pressure, in which an inconsistency inside oneself finally snaps, so that one can focus on a single direction with determination. It's like a fever - the body almost kills itself, so that something harmful to it can die sooner.

We are still in trouble if the average human is as stupid as I am.

Are you sure suffering is caused by a lack of intelligence, and not by too much intelligence? ('Forbidden fruit' argument) And that we suffer from a lack of tech rather than from an abundance of tech? (As Ted Kaczynski and the Amish seem to think)
Many animals are thriving despite their lack of intelligence. Any problem more complicated than "Get water, food and shelter. Find a mate, and reproduce" is a fabricated problem. It's because we're more intelligent than animals that we fabricate more difficult problems. And if something was within our ability, we'd not consider it a problem, which is why we always fabricate problems which are beyond our current capacities, which is how we trick ourselves into growth and improvement. Growth and improvement which somehow resulted in us being so powerful that we can destroy ourselves. Horseshoe crabs seem content with themselves, and even after 400 million years they just do their own thing. Some of them seem endangered now, but that's because of us? 

Bureaucracy

Caused by too much centralization, I think. Merging structures into fewer, bigger structures causes an overhead which doesn't seem to be worth it. Decentralizing everything may actually save the world, or at least decrease the feedback loop which causes a few entities to hog all the resources.

Moloch

Caused by too much information and optimization, and therefore unlikely to be solved with information and optimization. My take here is the same as with intelligence and tech. Why hasn't moloch killed us sooner? I believe it's because the conditions for moloch weren't yet reached (optimal strategies weren't visible, as the world wasn't legible and transparent enough), in which case, going back might be better than going forwards.

The tools you wish to use to solve human extinction are, from my perspective, what is currently leading us towards extinction. You can add AGI to this list of things if you want.

Thanks for making a well-thought out comment. It's really helpful for me to have an outside perspective from another intelligent mind.

I'm hoping to learn more from you, so I'm going to descend into a way of writing that assumes we have a lot of the same beliefs/understandings about the world. So if it gets confusing, I apologize for not being able to communicate myself more clearly.



Your 1st point:
This is an interesting perspective shift. The concept that by endeavoring to help people understand suffering, I would be causing suffering itself, since I'd be creating an expectation/ideal that virtually no one in society has met. I agree with this point, and I have a lot of perspectives on this. I'm curious about what you think of them.

My 1st perspective:  The marketing perspective.

Since I want people to change, I want them to be in pain or suffer, since without pain, the motivation to change is typically weak or non-existent. As cruel and non-idealistic as it may be, I don't endeavor to be an idealist. I endeavor to be a realist and a rationalist. People have to feel pain in order to take the actions necessary to better their lives. No one would imply that a farmer shouldn't put in the work to plant crops. The village's desire to eat is of greater importance than the farmer's desire to feel physically comfortable.

I think that if I were trying to cater to idealist perspectives, I would need to emphasize suffering being bearable when in pursuit of reward. On some level, I understand how this process works in my own psychology and have been able to manipulate it, but my ability to help other people do the same thing remains uncertain. I think that there are multiple other sources of reward that can be used for this purpose other than goal-progress-oriented reward (such as food, media entertainment, or social reward), but I agree that showing people that they are making clear progress will be very beneficial for increasing the acceptability of a person's suffering in this context.

My 2nd perspective: Which subsets of people will experience increased suffering as a result of the implementation of the ideas expressed in this document?

I don't think this would negatively impact people who are currently already unsatisfied with their life, since it would just shift their focus from their current ideals/unfulfilled desires to the new one that's being presented. 

(Except this time, since they understand the systems at play in their mind and in their environment, they will actually have the power to change things)

So people won't need to rely on a large set of trial & error + heuristics to perceive that they've reached their ideal. 

I could be wrong on the aforementioned point. If instead of shifting focuses, a person adds this to a long subconscious list of things they should be that they are not, it could increase suffering even in people who are already dissatisfied with life. If testing reveals that this is the case, I think that unraveling sets of beliefs about what a person should be will help alleviate this suffering. (This would be done by utilizing marketing tactics such as storytelling)

But for subsets of people who are currently satisfied with their life, I don't think that introducing new ideals will increase suffering. There are lots of current societal ideals/expectations that are near-impossible to live up to. If this subset of people is satisfied even in spite of the external ideals/expectations they are unable to embody, I see no reason to believe that introducing a new ideal would cause this subset of people to suffer.

It could be argued that the reason these people are satisfied is that they've fooled themselves into believing they are meeting those impossible societal ideals/expectations. Even if that's the case, I think they would just continue to fool themselves by using the mental processes that have reduced their suffering in the past.

 

My 3rd perspective: Worst case scenario:

I have some critically inaccurate belief about the world/other people. And in reality, the only thing I'll be able to do is show people how life could be better without actually being able to get anyone to change in this way.


Your 2nd point

Completely agree. It's good that you caught me on this point, because on reflection, if I don't clarify the core of the problem more clearly, the problem could easily be misperceived as something far simpler than is necessary to actually improve people's lives. I almost fell into the "One man's utopia is another man's hell" archetype. I think I'm just unaware of how to implement this without getting too deep into psychology and a lot of interrelated concepts.

If I was solely speaking to an audience that had your level of understanding of human psychology as it relates to suffering, then I could immediately begin clarifying my position on deeper concepts related to suffering.

But since the average person doesn't have a concept of suffering as an isolated concept from pain, I think that it could be difficult to help people make that conceptual jump.

I do intend to make a few separate versions intended for different target audiences' levels of intellect, so I might be able to solve this problem with my implementation here.


Your 3rd point
In retrospect, I realize that I was very liberal with my usage of the term "EA", and I made no effort to clarify what I meant by it. Just to be sure we're on the same page, when I say "we" in "we are products of EA", I'm referring to a hypothetical group of people who want to prevent human extinction (the target audience of this post). I definitely don't mean to imply that another person or group's altruistic deliberations had anything to do with our current beliefs or abilities. I'm also not referring to any organization or group of people when I say "EA". What I mean is the broader state of altruistic tendencies among intellectuals/rational people as a movement.

I realize now that I made an unjustified assumption that efforts for the prevention of human extinction would have altruistic motivations. And as you've pointed out, I may be wrong.


"Would reducing suffering really be a good thing?"

 I'm thinking about this question. And I realize I'm not actually qualified to answer it.

From a humanitarian or idealist perspective, obviously we should reduce pointless suffering and help people gain the tools necessary to deal with or accept suffering. But from a complexity or future-prediction perspective, it is difficult to know what the far-reaching consequences of this hypothetical world would be. If people know how to consciously reduce their own suffering and engineer their environments in a way that makes life more satisfactory for them, the results could be akin to a sort of malignant idea virus. As we've both said, without suffering, there is no change. By democratizing these tools, we could be erasing a core and crucial element of human change. Since choosing suffering as opposed to satisfaction is completely contradictory to human nature, once a person receives the information, they could be forever changed for the worse.


I also grew because of my suffering. And in this post, I meant to say that I was the most productive when I didn't eat. Not eating only significantly reduced my productivity when how much I ate was outside of my conscious control.

Suffering has made me into the person I am, and I am an avid believer in the pro-social benefits of personal suffering. But it is still true that unacceptable suffering outside of an individual's control is what causes the scarcity mindset. (Eradicating the scarcity mindset & increasing the prevalence of altruistic perspectives is the whole point of reducing human suffering in the context of preventing human extinction.) 

Additionally, there are certain important aspects of human psychology that I'm still unsure about. For example, to what degree does pain as a stimulus cause change? To what degree does suffering cause change?  To what degree does an individual's acceptability of suffering affect change?

And to the 3 aforementioned questions, in what way?


Your 4th point

My fault for not better clarifying my perspective here.

I'm claiming that even if scarcity mindsets fade and altruism spreads, we will still go extinct if the average person is as dumb as I am. "Create a community" and "Spread logical decisionmaking" are complementary to the concept of reducing human suffering, but their ultimate purpose is the continued existence of the human species. They are 2 of 3 points aimed at managing humanity's likelihood of destroying itself with some form of technology.

I personally think that intellect past a certain level gives humans the ability to deliberately manipulate their suffering, but I think that within the range of intellect on which every human I'm aware of lies, intellect does not seem to make suffering controllable in any non-proxy way. In other words, I don't think that tech or intellect are strong predictors relative to suffering at our current capabilities for tech and intellect.

To respond to your point on anything other than survival & reproduction being a fabricated problem, I would agree. I feel like you can always meta-contradict the concept of meaning, since human meaning is constructed by our psychological systems rather than by any universal truth. We can argue that pain is bad and pleasure is good, only to realize that those two concepts are only proxies for what we actually optimize for: suffering and satisfaction. So then we can argue that suffering is bad and satisfaction is good, only to gain perspective on the concept of agency and realize that suffering and satisfaction are just mesa-optimizers for evolution. I believe this process can go on and on, but the reality is that we are still mesa-optimizers, and so we have a natural inclination towards certain things and an intuition to call them meaningful. Anyone can take a nihilistic perspective and argue that nothing is meaningful, but if nothing is meaningful, then there is also no point in doing nothing, and we can do whatever we want. So I think the concept of meaning is a sort of recursive argument that we don't have the intellectual tools to solve. So rather than do nothing, I think we should do what we can.
 
Your 5th point (Bureaucracy)
I don't know anything about how decentralization can help with the problem of bureaucracy. Maybe you can point me to a source of information that will help me see your perspective on this? 

I'm also interested in your perspective on a few entities within a bureaucracy hogging all the resources. I presume you're referring to management claiming credit or capital distributions received by the owner class?

I look at bureaucracy from a business perspective. They call it operational complexity, and it refers to the reduced level of control of the founder over a business as the organization/tasks get more complex.

As the founder loses control over incentive structures, hiring practices, and training, the impact of the founder's competence dilutes and regresses towards the mean of the average human. 

It also refers to increases in tasks that are not the actual revenue-producing activity. An example would be salesmen filling out Excel sheets instead of talking with prospects.

Another example would be the increased training time required for adding an additional task to a specific role, and the increased training time for the people who are training those people, and the butterfly effect that has on the entirety of the business.

Also, the increased need for competent problem-solving as old systems break, and new systems (Or slight variations of old systems) become necessary.

My shortest definition of how I see bureaucracy: reduced efficiency, in multiple ways and for multiple reasons, as organizations grow larger.


Your 6th point (Moloch)
From my understanding, Moloch = a more complex representation of the prisoner's dilemma. 

The state of everything growing worse as a result of competition & a lack of control, resulting from the fact that you either pool resources in a locally helpful but globally harmful way, or you cease to exist.
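To pin down what I mean, here is a minimal prisoner's-dilemma-style sketch (the payoff numbers are made up purely for illustration): "racing" is each player's best response no matter what the other does, so both end up in the mutually worse state even though mutual restraint was available.

```python
# Minimal prisoner's-dilemma sketch of the dynamic (illustrative payoffs only).
# Each entry maps (my_choice, other_choice) -> (my_payoff, other_payoff).
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),
    ("restrain", "race"):     (0, 5),
    ("race",     "restrain"): (5, 0),
    ("race",     "race"):     (1, 1),
}
OPTIONS = ("restrain", "race")

def best_response(other_choice):
    # The option that maximizes my payoff, holding the other player's choice fixed.
    return max(OPTIONS, key=lambda mine: PAYOFFS[(mine, other_choice)][0])

for other in OPTIONS:
    print(f"If the other player plays {other!r}, my best response is {best_response(other)!r}")

# Both players reason this way, so both "race" and each gets a payoff of 1,
# even though mutual restraint would have given each of them 3.
```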
 

Your take: "Caused by too much information and optimization, and therefore unlikely to be solved with information and optimization. My take here is the same as with intelligence and tech."

I wonder about what basis/set of information you're using to make these 3 claims? I am currently unable to respond in a productive way without further context.


"Why hasn't moloch killed us sooner? I believe it's because the conditions for moloch weren't yet reached (optimal strategies weren't visible, as the world wasn't legible and transparent enough), in which case, going back might be better than going forwards."

And I may have misunderstood your point here, but from my understanding you're arguing something like: "Why aren't we in a state of perfect Moloch in present society?" (I'm not sure what you mean by "optimal strategies weren't visible".)  And when you say going back instead of forwards, you seem to be implying that the solution to preventing human extinction as it relates to tech is simply to remove technology.

Which, in my opinion, will inevitably lead future generations back to this exact point in technological capabilities, and they might not decide to regress their technological capabilities again, since their society would be vastly different from our own in terms of culture-based differences such as values & identity. I don't see the reasoning behind pushing the problem forward onto later generations as opposed to attempting to solve the problem for good.

It'd also be great if you could point me to a piece of writing on world legibility and transparency. Since I don't currently have the context with which to understand what those two things mean.




Your final point

What's leading humanity to extinction in my opinion: 
1. Increased technological capabilities
2. Our lack of ability to control the impacts of these capabilities
3. Our lack of incentive to manage those capabilities
4. Our lack of control over the first 3 points.

 

I agree that my proposed solutions for the problems we face are reliant on systems currently driving humanity towards extinction.

 

But I don't agree with the implied understanding that every system associated with our current path to extinction needs to be removed in order for human extinction to be prevented. I believe that certain aspects within the system can be changed while larger high-level systems remain the same, and we can still solve the problem of human extinction this way. For example, Elon Musk trying to get humans to Mars is reliant on technology, but it is also a hedge against human extinction at the same time.


I know I wrote a lot, but I love that you're making me question assumptions I've made that I've never even thought to question.

Maybe I don't currently have the base-knowledge necessary to create helpful high-level plans. But I'm not sure how I can better use my time relative to helping prevent human extinction.

I'll edit this document with what we've talked about here in mind and see what I can do to improve this post.

Thank you! Writing is not my strong suit, but I'm quite confident about the ideas. I've written a lot, so it's alright if you don't want to engage with all of it. No pressure!

I should explain the thing about suffering better:
We don't suffer from the state of the world, but from how we think about it. This is crucial. When people try to improve other people's happiness, they talk about making changes to reality, but that's the least effective way they could go about it.
I believe this is even sufficient: that we can enjoy life as it is now, without making any changes to it, by simply adopting a better perspective on things.

For example, inequality is a part of life, likely an unavoidable one (The Pareto principle seems to apply in every society no matter its type). And even under inequality, people have been happy, so it's not even an issue in itself. But now we're teaching people in lower positions that they're suffering from injustice, that they're pitiful, that they're victims, and we're teaching everyone else that life could be a paradise, if only evil and immoral influences weren't preventing it. But this is a sure way to make people unhappy with their existence. To make them imagine how much better things could be, and make comparisons between a naive ideal and reality. Comparison is the thief of joy, and most people are happy with their lot unless you teach them not to be.
Teaching people about suffering doesn't cause it per se, but if you make people look for suffering, they will find it. If you condition your perception to notice something unpleasant, you will see it everywhere. Training yourself to notice suffering may have side-effects. I have a bit of tinnitus, and I got over it by not paying it any attention. It's only like this that my mind will start to filter it away, so that I can forget about it.

The marketing perspective

I don't think you need pain to motivate people to change; the carrot is as good as the stick. But you need one of the two at minimum (curiosity and other such drives make you act naturally, but do so by making it uncomfortable not to act and rewarding to act).
I don't think that suffering is bearable because of reward itself, but because of perceived value and meaning. Birth is really painful, but the event is so meaningful that the pain becomes secondary. Same for people who compete in the olympics, they have found something meaningful enough that a bit of physical pain is a non-issue.
You can teach this to people, but it's hard to apply. It's better to help them avoid the sort of nihilism which makes them question whether things are worth it. I think one of the causes of modern nihilism is a lack of aesthetics. 

My 2nd perspective

I don't think understanding translates directly into power. It's a common problem to think "I know what I should be doing, but I can't bring myself to do it". If understanding something granted you power over it, I'd practically be a wizard by now.
You can shift the problem that people attack, but if they have actual problems which put them in danger, I think their focus should remain on these. You can always create dissatisfaction by luring them towards better futures, in a way which benefits both them and others at the same time.

I'm never motivated by moral arguments, but some self-help books are alluring to me because they prey on my selfishness in a healthy manner which also demands responsibility and hard work.

As for the third possibility, that sounds a bit pessimistic. But I don't think it would be a worthless outcome as long as the image of what could be isn't a dangerous delusion. Other proposed roads to happiness include "Destroy your ego", "Be content with nothing", "Eat SSRIs forever", and various self-help which asks you to "hustle" and overwork.

who want to prevent human extinction

I see! That's something deeper than preventing suffering. I even think that there are some conflicts between the two goals. But motivating people towards this should be easier, since they're preventing their own destruction as well, and not just helping other people.

it is difficult to know what the far-reaching consequences of this hypothetical world would be

It really is. But it's interesting to me how both of us haven't used this information to decrease our own suffering. It's like I can't value things if they come too easy, and like I want to find something which is worth my suffering. 
But we can agree that wasted suffering is a thing. That state of indecision, being unable to either die or live, yield or fight back, fix the cause of suffering or come to terms with it.
The scarcity mindset is definitely a problem, but many resources are limited. I think a more complex problem would be that people tend to look for bad actions to avoid, rather than positive actions to adopt. It's all "we need to stop doing X" and "Y is bad" and "Z is evil". It's all about reduction, restrictions, avoidance. It simply chokes us. Many good people trap themselves with excessive limitations and become unable to move freely. To simply use positives like "You should be brave", "You should stand up for what you believe in", "You should accept people for who they are" would likely help improve this problem.

there are certain important aspects of human psychology that I'm still unsure about

I think pain and such are thresholds between competing things. If I'm tired and hungry, whether or not I will cook some food depends on which of the two cause the greatest discomfort.
When procrastinating I've also found that deadlines helped me. Once I was backed into a corner and had to take action, I suddenly did. I ran away for as long as I could. The stress from deadlines might also result in dopamine and adrenaline, which help in the short term.
"Acceptance of suffering" is a bit ambigious. Accepting something usually reduces the suffering it causes, and accepting suffering lessens it too. But one can get too used to suffering, which makes them wait too long before they change anything, like the "This is fine" meme or the boiling frog that I mentioned earlier

Spread logical decisionmaking

Logic can defend against mistakes caused by logic, but we did not destroy ourselves in the past when we were less logical than now. I also don't think that logic reduces suffering. Many philosophers have been unhappy, and many people with Down syndrome are all smiles. Less intelligent people often have a sort of wisdom about them, often called "street smarts" when observed, but I think that their lack of knowledge leads them to make fewer map-territory errors. They're nearer to reality because they have less knowledge which can mislead them.

I personally think that intellect past a certain level gives humans the ability to deliberately manipulate their suffering

I don't think any human being is intelligent enough to do this (Buddha managed, but the method was crude, reducing not only suffering). What we can do, is manipulate our reward systems. But this leaves us feeling empty, as we cannot fake meaning. Religion basically tells us to live a good life according to a fixed structure, and while most people don't like this lack of freedom, it probably leads to more happiness in the long run (for the same reason that neuroticism and conscientiousness are inversely correlated)

since human meaning is constructed by our psychological systems

Yes, the philosophical question of meaning and the psychology of meaning are different. To solve meaninglessness by proving external meaning (this is impossible, but let's assume you could) is like curing depression by arguing that one should be happy. Meaning is basically investment, engagement, and involvement in something which feels like it has substance.

I recommend just considering humanity as a set of axioms. Like with mathematical axioms, this gives us a foundation. Like with mathematics, it doesn't matter that this foundation is arbitrary, for no "absolute" foundation can exist (in other words, no set of axioms is more correct than any other. Objectivity does not exist, even in mathematics; everything is inherently relative).
Since attempting to prove axioms is silly, considering human nature (or yourself) as sets of axioms allows you not to worry about meaning and values anymore. If you want humanity to survive, you no longer have to justify this preference.

Maybe you can point me to a source of information that will help me see your perspective on this? 

That would be difficult as it's my own conclusion. But do you know this quote by Taleb?
"I am, at the Fed level, libertarian;
at the state level, Republican;
at the local level, Democrat;
and at the family and friends level, a socialist."
The smaller the scope, the better. The reason stupid people are happier than smart people is because their scope of consideration is smaller. Being a big fish in a small pond feels good, but increase your scope of comparison to an entire country, and you become a nobody. Politics makes people miserable because the scope is too big; it's feeding your brain with problems that you have no possibility of solving by yourself. "Community" is essential to human well-being because it's cohesion on a local level. "Family values" are important for the same reason. There's more crime in bigger cities than smaller ones. Smaller communities have less crazy behaviour; they're more down-to-earth. A lot of terrible things emerge when you increase the scale of things.
Multiple things on a smaller scale do not seem to have a cost. One family can have great coherence. You can have 100 families living side by side, still great. But force them all to live together in one big house, and you will notice the cost of centralization. You will need hierarchies, coordination, and more rules. This is similar to urbanization. It's also similar to how the internet went from being millions of websites to becoming a few hundred popular websites. It's even similar to companies merging into giants that most people consider evil.
An important antidote is isolation (gatekeeping, borders, personal boundaries, independence, separation of powers, the single-responsibility principle, live-and-let-live philosophies, privacy and other rights, preservation).
I wish it was just "reduced efficiency" which was the problem. And sadly, it seems that the optimal way to increase the efficiency between many things is simply to force them towards similarity. For society, this means the destruction of different cultures, the destruction of different ways of thinking, the destruction of different moralities and different social norms.

I presume you're referring to management claiming credit

It's much more abstract than that. The number of countries, brands, languages, accents, standards, websites, communities, religions, animals, etc. is decreasing. All slowly tending towards 1 thing having a monopoly, with this 1 thing being the average of what was merged.

Don't worry if you don't get last few points. I've tried to explain them before, but I have yet to be understood.

I wonder about what basis/set of information you're using to make these 3 claims?

Once a moloch problem has been started, you "either join or die", like you said. But we can prevent moloch problems from occurring in the first place, by preventing the world from becoming legible enough. For this idea, I was inspired by "Seeing like a state" and this

There's many prisoners-dilemma like situations in society, which do not cause problems simply because people don't have enough information to see them. If enough people cannot see them, then the games are only played by a few people. But that's the only solution to Moloch: Collectively agree not to play (or, I suppose, never start playing in the first place). The number of moloch-like problems has increased as a side-effect of the increased accessibility of information. Dating apps ruined dating by making it more legible. As information became more visible, and people had more choices and could make more informed decisions, they became less happy. The hidden information in traditional dating made it more "human", and less materialistic as well. Since rationalists, academics and intellectuals in general want to increase the openness of information and seem rather naive about the consequences, I don't want to become either. 

I agree with the factors leading to human extinction. My solution is "go back". This may not be possible, and like you say, we need to use intelligence and technology to go forwards instead. But like the alignment problem, this is rather difficult. I haven't even taught myself high-level mathematics, I've noticed all this through intuition alone.
I think letting small disasters happen naturally could help us prevent black-swan like events. Just like burning small patches of trees can prevent large forest fires. Humanity is doing the opposite. By putting all its eggs in one basket and making things "too big to fail", we make sure that once a disaster happens, it hits hard.

Related to all of this: https://slatestarcodex.com/2017/03/16/book-review-seeing-like-a-state/ (the page mentions black swan risks, Taleb, Ribbonfarms, legibility and centralization). I actually had most of these thought before I knew about this page, so that gives me some confidence that I'm not just connecting unrelated concepts like a schizophrenic.

My argumentation is a little messy, but I don't want to invest my life in understanding this issue or anything. Kaczynski's books have a few overlapping arguments with me, and the other books I know are even more crazy, so I can't recommend them. 

But maybe I'm just worrying over nothing. I'm extrapolating things as linear or exponential, but they may be s-shaped or self-correcting cycles. And any partial collapse of society will probably go back to normal or even bring improvements with it in the long run. A lot of people have ruined themselves worrying over things which turned out just fine in the end.