Oxidize

Thanks for making a well-thought-out comment. It's really helpful for me to get an outside perspective from another intelligent mind.

I'm hoping to learn more from you, so I'm going to slip into a way of writing that assumes we share a lot of the same beliefs/understandings about the world. If it gets confusing, I apologize for not expressing myself more clearly.



Your 1st point:
This is an interesting perspective shift: that by endeavoring to help people understand suffering, I would be causing suffering itself, since I'd be creating an expectation/ideal that virtually no one in society has met. I agree with this point, and I have several perspectives on it. I'm curious what you think of them.

My 1st perspective:  The marketing perspective.

Since I want people to change, I want them to be in pain or to suffer, because without pain the motivation to change is typically weak or non-existent. As cruel and non-idealistic as that may be, I don't endeavor to be an idealist; I endeavor to be a realist and a rationalist. People have to feel pain in order to take the actions necessary to better their lives. No one would imply that a farmer shouldn't put in the work to plant crops: the village's desire to eat is of greater importance than the farmer's desire to feel physically comfortable.

I think that if I were trying to cater to idealist perspectives, I would need to emphasize that suffering is bearable when it's in pursuit of reward. On some level I understand how this process works in my own psychology and have been able to manipulate it, but my ability to help other people do the same remains uncertain. I think there are several sources of reward that can be used for this purpose other than goal-progress-oriented reward (such as food, entertainment media, or social reward), but I agree that showing people they are making clear progress will go a long way toward increasing the acceptability of a person's suffering in this context.

My 2nd perspective: Which subsets of people will experience increased suffering as a result of the implementation of the ideas expressed in this document?

I don't think this would negatively impact people who are already unsatisfied with their life, since it would just shift their focus from their current ideals/unfulfilled desires to the new one being presented.

(Except this time, since they understand the systems at play in their mind and in their environment, they will actually have the power to change things)

So people won't need to rely on a large amount of trial & error plus heuristics to perceive that they've reached their ideal.

I could be wrong on this point. If, instead of shifting focus, a person adds this to a long subconscious list of things they should be but are not, it could increase suffering even in people who are already dissatisfied with life. If testing reveals that this is the case, I think that unraveling their beliefs about what a person should be would help alleviate this suffering. (This would be done using marketing tactics such as storytelling.)

But for the subset of people who are currently satisfied with their life, I don't think that introducing new ideals will increase suffering. There are lots of existing societal ideals/expectations that are near-impossible to live up to. If this subset of people is satisfied even in spite of the external ideals/expectations they are unable to embody, I see no reason to believe that introducing one more ideal would cause them to suffer.

It could be argued that the reason these people are satisfied is that they've fooled themselves into believing they meet those impossible societal ideals/expectations. Even so, I think they would just continue to fool themselves, using the same mental processes that have reduced their suffering in the past.

 

My 3rd perspective: The worst-case scenario.

I have some critically inaccurate belief about the world or other people, and in reality the only thing I'll be able to do is show people how life could be better, without actually being able to get anyone to change in this way.


Your 2nd point

Completely agree. It's good that you caught me on this point, because on reflection, if I don't articulate the core of the problem more clearly, it could easily be misperceived as something far simpler than what's actually necessary to improve people's lives. I almost fell into the "One man's utopia is another man's hell" archetype. I think I'm just unsure how to implement this without getting too deep into psychology and a lot of interrelated concepts.

If I were solely speaking to an audience with your level of understanding of human psychology as it relates to suffering, I could immediately begin clarifying my position on the deeper concepts related to suffering.

But since the average person doesn't have a concept of suffering as something separate from pain, I think it could be difficult to help people make that conceptual jump.

I do intend to make a few separate versions aimed at target audiences of different levels of sophistication, so I might be able to solve this problem in my implementation.


Your 3rd point
In retrospect, I realize that I was very liberal with my usage of the term "EA" and made no effort to clarify what I meant by it. Just to be sure we're on the same page: when I say "we" in "we are products of EA", I'm referring to a hypothetical group of people who want to prevent human extinction (the target audience of this post). I definitely don't mean to imply that another person or group's altruistic deliberations had anything to do with our current beliefs or abilities. I'm also not referring to any particular organization or group of people when I say "EA"; what I mean is the broader movement of altruistic tendencies among intellectual/rational people.

I realize now that I made an unjustified assumption that efforts for the prevention of human extinction would have altruistic motivations. And as you've pointed out, I may be wrong.


"Would reducing suffering really be a good thing?"

I've been thinking about this question, and I realize I'm not actually qualified to answer it.

From a humanitarian or idealist perspective, obviously we should reduce pointless suffering and help people gain the tools necessary to deal with or accept suffering. But from a complexity or future-prediction perspective, it is difficult to know what the far-reaching consequences of this hypothetical world would be. If people knew how to consciously reduce their own suffering and engineer their environments to make life more satisfactory, the result could be akin to a sort of malignant idea-virus. As we've both said, without suffering there is no change. By democratizing these tools, we could be erasing a core and crucial element of human change. Since choosing suffering over satisfaction is completely contrary to human nature, once a person receives this information, they could be forever changed for the worse.


I also grew because of my suffering. And in this post, I meant to say that I was most productive when I didn't eat; not eating only significantly reduced my productivity when how much I ate was outside of my conscious control.

Suffering has made me into the person I am, and I am an avid believer in the pro-social benefits of personal suffering. But it is still true that unacceptable suffering outside an individual's control is what causes the scarcity mindset. (Eradicating the scarcity mindset and increasing the prevalence of altruistic perspectives is the whole point of reducing human suffering in the context of preventing human extinction.)

Additionally, there are certain important aspects of human psychology that I'm still unsure about. For example: to what degree does pain as a stimulus cause change? To what degree does suffering cause change? To what degree does an individual's acceptability of suffering affect change? And for each of these three questions, in what way?


Your 4th point

My fault for not clarifying my perspective better here.

I'm claiming that even if scarcity mindsets fade and altruism spreads, we will still go extinct if the average person is as dumb as I am. "Create a community" and "spread logical decision-making" are complementary to the concept of reducing human suffering, but their ultimate purpose is the continued existence of the human species. They are two of the three points aimed at managing humanity's likelihood of destroying itself with some form of technology.

I personally think that intellect past a certain level gives humans the ability to deliberately manipulate their suffering, but within the range on which every human I'm aware of lies, intellect does not seem to make suffering controllable in any non-proxy way. In other words, I don't think that tech or intellect are strong predictors where suffering is concerned, at our current capabilities for tech and intellect.

To respond to your point that anything other than survival & reproduction is a fabricated problem: I would agree. You can always meta-contradict the concept of meaning, since human meaning is constructed by our psychological systems rather than by any universal truth. We can argue that pain is bad and pleasure is good, only to realize that those two concepts are merely proxies for what we actually optimize for: suffering and satisfaction. So then we can argue that suffering is bad and satisfaction is good, only to gain perspective on the concept of agency and realize that suffering and satisfaction are just mesa-optimizers for evolution.

I believe this process can go on and on, but the reality is that we are still mesa-optimizers, so we have a natural inclination towards certain things and an intuition to call them meaningful. Anyone can take a nihilistic perspective and argue that nothing is meaningful, but if nothing is meaningful, then there is also no point in doing nothing, and we can do whatever we want. So I think the concept of meaning is a sort of recursive argument that we don't have the intellectual tools to resolve. Rather than do nothing, I think we should do what we can.
 
Your 5th point (Bureaucracy)
I don't know anything about how decentralization can help with the problem of bureaucracy. Could you point me to a source of information that would help me see your perspective on this?

I'm also interested in your perspective on a few entities within a bureaucracy hogging all the resources. I presume you're referring to management claiming credit or capital distributions received by the owner class?

I look at bureaucracy from a business perspective. There it's called operational complexity, and it refers to the founder's reduced control over a business as the organization and its tasks become more complex.

As the founder loses control over incentive structures, hiring practices, and training, the impact of the founder's competence is diluted and regresses towards the mean of the average human.

It also refers to the growth of tasks that are not the actual revenue-producing activity. An example would be salespeople filling out Excel sheets instead of talking with prospects.

Another example would be the increased training time required when an additional task is added to a specific role, the increased training time for the people who train those people, and the butterfly effect that has on the entire business.

There is also the increased need for competent problem-solving as old systems break and new systems (or slight variations of old systems) become necessary.

My shortest definition of how I see bureaucracy: reduced efficiency, in multiple ways and for multiple reasons, as organizations grow larger.


Your 6th point (Moloch)
From my understanding, Moloch is a more complex generalization of the prisoner's dilemma: the state of everything growing worse as a result of competition and a lack of control, where you either pool resources in a locally helpful but globally harmful way, or you cease to exist.
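To check that I've got the concept right, here's a toy payoff model of that dynamic. This is my own sketch with made-up numbers (the commons value, the defection edge, and the agent count are all arbitrary), not anything from your comment:

```python
# Toy n-player prisoner's dilemma in the "Moloch" sense.
# All numbers are made up purely for illustration.

N = 5  # number of competing agents

def payoff(i_defect: bool, other_defectors: int) -> float:
    """One agent's payoff: a shared commons that every defector
    erodes, plus a private edge for defecting yourself."""
    commons = 10.0 - 2.0 * (other_defectors + i_defect)
    edge = 3.0 if i_defect else 0.0
    return commons + edge

# Defecting is individually dominant: it beats cooperating by +1
# no matter how many of the other N-1 agents defect...
print(payoff(True, 0), ">", payoff(False, 0))      # 11.0 > 10.0
# ...yet the all-defect equilibrium leaves everyone far worse off
# than mutual cooperation would have.
print(payoff(True, N - 1), "<", payoff(False, 0))  # 3.0 < 10.0
```

If that toy model matches what you mean by Moloch, great; if not, it should at least make it easier to point at exactly where my understanding diverges from yours.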
 

Your take: "Caused by too much information and optimization, and therefore unlikely to be solved with information and optimization. My take here is the same as with intelligence and tech."

I wonder what basis/set of information you're using to make these three claims. I'm currently unable to respond productively without further context.


"Why hasn't moloch killed us sooner? I believe it's because the conditions for moloch weren't yet reached (optimal strategies weren't visible, as the world wasn't legible and transparent enough), in which case, going back might be better than going forwards."

I may have misunderstood your point here, but from my understanding you're asking something like: "Why aren't we in a state of perfect Moloch in present society?" (I'm not sure what you mean by "optimal strategies weren't visible".) And when you say going back instead of forwards, you seem to be implying that the solution to preventing human extinction, as it relates to tech, is simply to remove technology.

In my opinion, that will inevitably lead future generations back to this exact point in technological capability, and they might not also decide to regress, since their society would be vastly different from our own in culture-based respects such as values & identity. I don't see the reasoning behind pushing the problem onto later generations rather than attempting to solve it for good.

It'd also be great if you could point me to a piece of writing on world legibility and transparency, since I don't currently have the context to understand what those two terms mean.




Your final point

What's leading humanity to extinction, in my opinion:

1. Increased technological capabilities.
2. Our lack of ability to control the impacts of those capabilities.
3. Our lack of incentive to manage those capabilities.
4. Our lack of control over the first 3 points.

 

I agree that my proposed solutions for the problems we face are reliant on systems currently driving humanity towards extinction.

 

But I don't agree with the implied claim that every system associated with our current path to extinction needs to be removed in order for human extinction to be prevented. I believe that certain aspects within the system can be changed while the larger high-level systems remain the same, and that we can still solve the problem of human extinction this way. For example, Elon Musk trying to get humans to Mars is reliant on technology, but it is also a hedge against human extinction at the same time.


I know I wrote a lot, but I love that you're making me question assumptions I've made that I've never even thought to question.

Maybe I don't currently have the base-knowledge necessary to create helpful high-level plans. But I'm not sure how I can better use my time relative to helping prevent human extinction.

I'll edit this document with what we've talked about here in mind and see what I can do to improve this post.

Oxidize

The post is targeted towards the subset of the EA/LW community that is concerned about extinction from AI.

Ultimately, I think I had a misunderstanding of the audience that would end up reading my post, and I'm still largely ignorant of the psychological nuances of the average LW reader.

As you implied, I did have a narrow audience in mind, and I assumed that LW's algorithm would function like popular social-media algorithms and show the post only to the subset of the population I was aiming to speak to. I also assumed that the implications of my post would motivate readers to fill the information gaps between me and them by reading the links/footnotes, which seems to have been wrong.

For my first sentence, where I make the assumption that we subconsciously optimize for [...], I make a few assumptions:

1. That the reader's definition of "subconscious" is the same as my definition, which is a large leap considering mine was formed through a lot of introspection & self-examination of personal experiences.
2. That, if we do share the same definition of "subconscious", my claim is self-explanatory.

Ex 1. When a cancer patient is told they have 5 years left to live, they are heartbroken and their entire life changes.

Ex 2. When an AI researcher predicts 5 years until the advent of AGI/ASI and the subsequent extinction of the human race, no one takes it seriously.

CstineSublime, if I were to redo this post, do you think I should explain this claim with an example like this, instead of assuming that readers will automatically make the connection in their minds?


P.S. The footnotes were meant to be definitions, similar to what you see when you hover over a complex word on Wikipedia. Let me know if that wasn't clear.

Oxidize

Thanks for commenting.

I didn't include the contents of the link in the post because I thought it would make the post too long, and I thought it had a different main idea, so I figured it would be better to make two separate posts. I can't change it now because of the automatic rate-restriction, but maybe it would've been a better post if I had included the contents of the linked doc in the post itself.

I'm realizing that I packed an unusually large amount of information into a single post, attempted to fill the information gaps only with links & footnotes that take a significant amount of time to read, and made little effort to give readers the motivation to read them.

In my next post, I'll try to give readers a better reason to read, and I'll be more thorough in clarifying my positions & claims.

I also re-read the comment you're referring to as if someone else had written it, and I see what you mean. I edited it to "Currently in early phases, so forgive me for linking to a series of incomplete thoughts". Hopefully that sets expectations low without appearing arrogant or condescending.

Oxidize

Oh, I linked the wrong thing. I would downvote this too. Sorry about setting an expectation and then not fulfilling it.

Edit: I fixed the link at the end of the post.

It does suck that I have to wait a week before posting anything again because of a simple mistake, though. I guess I'll just have to hope I don't mess up again in the future.

Oxidize

I'm new to LW. Why was this post downvoted? How can I make better posts in the future? https://www.lesswrong.com/posts/n7Fa63ZgHDH8zcw9d/we-can-survive

Oxidize

Could I get some constructive criticism about why I'm being downvoted? It would be helpful for the sake of avoiding the same mistakes in the future.

Oxidize

Correct. It lacks tactical practicality right now, but I think that from a macro-directional perspective, it's sensible to align all of my current actions with that end goal. And I believe there is huge demand among business-minded intellectuals and ambitious people for a community like this to be created.

Oxidize

AI isn't really new technology though, right? Do you have evidence of alarmism around AI in the past?

And do you have anecdotes of intelligent/rational people being alarmist about a technology where the alarm turned out to be unfounded?

I think these pieces of evidence/anecdotes would strengthen your argument.

What is your estimated timeline for humanity's extinction if it continues on its current path?

What information are you using for the foundation of your beliefs around the progress of science & technology?

Oxidize

How do you think competent people can solve this problem within their own fields of expertise? 

For example, the EA community is a small & effective community of the kind you've referenced, in the domain of commonplace charity/altruism practices.

How could we solve the median researcher problem & improve the efficacy & reputation of altruism as a whole?

Personally, I suggest taking a marketing approach. If we endeavor to understand the important similarities between "median researchers", so that we can speak to them in the language they want to hear, we may be able to attract attention from the broader altruism community, which could eventually be leveraged to place EA in a position of authority or expertise.

What do you think?

Oxidize

What do you mean by red flag? A red flag on the author's side? If so, I don't understand your sentiment here. Partisan issues exist.
